Information retrieval
Information retrieval (IR) in computing and information science is the process of obtaining information system resources that are relevant to an information need from a collection of those resources. Searches can be based on full-text or other content-based indexing. Information retrieval is the science of searching for information in a document, searching for documents themselves, and searching for the metadata that describes data, as well as searching databases of texts, images or sounds.

Automated information retrieval systems are used to reduce what has been called information overload. An IR system is a software system that provides access to books, journals and other documents; it also stores and manages those documents. Web search engines are the most visible IR applications.

An information retrieval process begins when a user or searcher enters a query into the system. Queries are formal statements of information needs, for example search strings in web search engines. In information retrieval a query does not uniquely identify a single object in the collection; instead, several objects may match the query, perhaps with different degrees of relevance.

An object is an entity that is represented by information in a content collection or database. User queries are matched against the database information. However, as opposed to classical SQL queries of a database, in information retrieval the results returned may or may not match the query, so results are typically ranked. This ranking of results is a key difference between information retrieval searching and database searching.

Depending on the application, the data objects may be, for example, text documents, images, audio, mind maps or videos. Often the documents themselves are not kept or stored directly in the IR system, but are instead represented by document surrogates or metadata.
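The query matching and relevance ranking described above can be sketched with a minimal TF-IDF ranker. This is only an illustration with invented example documents; real systems use far more elaborate scoring models:

```python
import math
from collections import Counter

def tfidf_rank(query, docs):
    """Rank documents against a query by TF-IDF cosine similarity.

    Returns (doc_index, score) pairs, best match first.
    """
    tokenized = [doc.lower().split() for doc in docs]
    n = len(docs)
    # Document frequency: in how many documents does each term occur?
    df = Counter(term for toks in tokenized for term in set(toks))
    # Inverse document frequency: rarer terms carry more weight.
    idf = {term: math.log(n / count) for term, count in df.items()}

    def vector(tokens):
        tf = Counter(tokens)
        return {t: tf[t] * idf.get(t, 0.0) for t in tf}

    def cosine(u, v):
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        norm_u = math.sqrt(sum(w * w for w in u.values()))
        norm_v = math.sqrt(sum(w * w for w in v.values()))
        return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

    qvec = vector(query.lower().split())
    scores = [(i, cosine(qvec, vector(toks))) for i, toks in enumerate(tokenized)]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

docs = [
    "information retrieval ranks documents by relevance",
    "the cat sat on the mat",
    "search engines perform information retrieval at web scale",
]
ranking = tfidf_rank("information retrieval", docs)
# The two documents about information retrieval outrank the unrelated one.
```

Cosine similarity normalises each vector, so scores stay comparable across documents of different lengths rather than favouring long documents with many term repetitions.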
Most IR systems compute a numeric score for how well each object in the database matches the query, and rank the objects according to this value. The top-ranking objects are then shown to the user. The process may be iterated if the user wishes to refine the query.

"there is ... a machine called the Univac ... whereby letters and figures are coded as a pattern of magnetic spots on a long steel tape. By this means the text of a document, preceded by its subject code symbol, can be recorded ... the machine ... automatically selects and types out those references which have been coded in any desired way at a rate of 120 words a minute"

The idea of using computers to search for relevant pieces of information was popularized in the article "As We May Think" by Vannevar Bush in 1945. It would appear that Bush was inspired by patents for a 'statistical machine' – filed by Emanuel Goldberg in the 1920s and 1930s – that searched for documents stored on film. A computer searching for information was first described by Holmstrom in 1948, in an early mention of the Univac computer (quoted above). Automated information retrieval systems were introduced in the 1950s: one even featured in the 1957 romantic comedy Desk Set. In the 1960s, the first large information retrieval research group was formed by Gerard Salton at Cornell. By the 1970s several different retrieval techniques had been shown to perform well on small text corpora such as the Cranfield collection (several thousand documents). Large-scale retrieval systems, such as the Lockheed Dialog system, came into use early in the 1970s. In 1992, the US Department of Defense, along with the National Institute of Standards and Technology (NIST), cosponsored the Text Retrieval Conference (TREC) as part of the TIPSTER text program.
The aim of TREC was to support research within the information retrieval community by supplying the infrastructure needed to evaluate text retrieval methodologies on a very large text collection. This catalyzed research on methods that scale to huge corpora. The introduction of web search engines has boosted the need for very large scale retrieval systems even further.

Information retrieval techniques are employed in a wide range of application areas, and many methods and techniques build on them.

For effective retrieval of relevant documents by IR strategies, the documents are typically transformed into a suitable representation. Each retrieval strategy incorporates a specific model for its document representation. Common models can be categorized along two dimensions: the mathematical basis and the properties of the model.

The evaluation of an information retrieval system is the process of assessing how well a system meets the information needs of its users. In general, measurement considers a collection of documents to be searched and a search query. Traditional evaluation metrics, designed for Boolean retrieval or top-k retrieval, include precision and recall. All measures assume a ground-truth notion of relevance: every document is known to be either relevant or non-relevant to a particular query. In practice, queries may be ill-posed and there may be different shades of relevance.
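Given such ground-truth judgments, precision and recall reduce to simple set arithmetic over a single query's results. A minimal sketch, with invented document IDs:

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query against ground-truth relevance."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)  # relevant documents actually returned
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# A system returns documents 1, 2, 5 and 7; documents 2, 5 and 6 are relevant.
p, r = precision_recall({1, 2, 5, 7}, {2, 5, 6})
# p == 0.5: half of what was returned is relevant.
# r == 2/3: two of the three relevant documents were found.
```

Precision penalises returning irrelevant documents; recall penalises missing relevant ones. The two typically trade off as a system returns more results.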
Created: 2001-11-22T12:17:27Z
Last modified: 2023-12-06T18:36:31Z
https://en.wikipedia.org/wiki/Information_retrieval
List of Italian-language poets
List of poets who wrote in Italian (or Italian dialects).
Last modified: 2023-04-13T06:06:18Z
https://en.wikipedia.org/wiki/List_of_Italian-language_poets
International Criminal Tribunal for the former Yugoslavia
The International Criminal Tribunal for the former Yugoslavia (ICTY) was a body of the United Nations that was established to prosecute the war crimes that had been committed during the Yugoslav Wars and to try their perpetrators. The tribunal was an ad hoc court located in The Hague, Netherlands.

It was established by Resolution 827 of the United Nations Security Council, which was passed on 25 May 1993. It had jurisdiction over four clusters of crimes committed on the territory of the former Yugoslavia since 1991: grave breaches of the Geneva Conventions, violations of the laws or customs of war, genocide, and crimes against humanity. The maximum sentence that it could impose was life imprisonment. Various countries signed agreements with the United Nations to carry out custodial sentences.

A total of 161 persons were indicted; the final indictments were issued in December 2004, the last of which were confirmed and unsealed in the spring of 2005. The final fugitive, Goran Hadžić, was arrested on 20 July 2011. The final judgment was issued on 29 November 2017, and the institution formally ceased to exist on 31 December 2017.

Residual functions of the ICTY, including oversight of sentences and consideration of any appeal proceedings initiated since 1 July 2013, are under the jurisdiction of a successor body, the International Residual Mechanism for Criminal Tribunals (IRMCT).

United Nations Security Council Resolution 808 of 22 February 1993 decided that an "international tribunal shall be established for the prosecution of persons responsible for serious violations of international humanitarian law committed in the territory of the former Yugoslavia since 1991", and called on the Secretary-General to "submit for consideration by the Council ... a report on all aspects of this matter, including specific proposals and where appropriate options ... taking into account suggestions put forward in this regard by Member States".
The Court was originally proposed by German Foreign Minister Klaus Kinkel.

Resolution 827 of 25 May 1993 approved the S/25704 report of the Secretary-General and adopted the Statute of the International Tribunal annexed to it, formally creating the ICTY. It was to have jurisdiction over four clusters of crimes committed on the territory of the former SFR Yugoslavia since 1991: grave breaches of the Geneva Conventions, violations of the laws or customs of war, genocide, and crimes against humanity. The maximum sentence the ICTY could impose for these crimes was life imprisonment. In 1993 the internal infrastructure of the ICTY was built, and 17 states signed agreements with the ICTY to carry out custodial sentences.

1993–1994: In the first year of its existence, the Tribunal laid the foundations for its operation as a judicial organ. It established the legal framework for its work by adopting rules of procedure and evidence, as well as rules of detention and a directive for the assignment of defence counsel. Together, these rules established a legal aid system for the Tribunal. As the ICTY was part of the United Nations and was the first international court for criminal justice, the development of a juridical infrastructure was considered quite a challenge. After the first year, however, the first ICTY judges had drafted and adopted all the rules for court proceedings.

1994–1995: The ICTY established its offices within the Aegon Insurance Building in The Hague (which was, at the time, still partially in use by Aegon) and detention facilities in Scheveningen in The Hague (the Netherlands). The ICTY hired many staff members, and by July 1994 the Office of the Prosecutor had sufficient staff to begin field investigations. By November 1994, the first indictments were presented to the Court and confirmed, and in 1995 the staff numbered over 200 persons from all over the world.

In 1994 the first indictment was issued against the Bosnian Serb concentration camp commander Dragan Nikolić.
This was followed on 13 February 1995 by two indictments against a group of 21 Bosnian Serbs charged with committing atrocities against Muslim and Croat civilian prisoners. While the war in the former Yugoslavia was still raging, the ICTY prosecutors showed that an international court was viable. However, no accused had yet been arrested.

The court confirmed eight indictments against 46 individuals and issued arrest warrants. Bosnian Serb indictee Duško Tadić became the subject of the Tribunal's first trial. Tadić had been arrested by German police in Munich in 1994 for his alleged actions in the Prijedor region of Bosnia-Herzegovina (especially in the Omarska, Trnopolje and Keraterm detention camps). He made his first appearance before the ICTY Trial Chamber on 26 April 1995 and pleaded not guilty to all of the charges in the indictment.

1995–1996: Between June 1995 and June 1996, 10 public indictments were confirmed against a total of 33 individuals. Six of the newly indicted persons were transferred to the Tribunal's detention unit. In addition to Duško Tadić, by June 1996 the tribunal had Tihomir Blaškić, Dražen Erdemović, Zejnil Delalić, Zdravko Mucić, Esad Landžo and Hazim Delić in custody. Erdemović became the first person to enter a guilty plea before the tribunal. Between 1995 and 1996, the ICTY also dealt with miscellaneous cases involving several detainees which never reached the trial stage.

The Tribunal indicted 161 individuals between 1997 and 2004 and completed proceedings against all of them. The indictees ranged from common soldiers to generals and police commanders, all the way to prime ministers. Slobodan Milošević was the first sitting head of state indicted for war crimes.
Other "high level" indictees included Milan Babić, former President of the Republika Srpska Krajina; Ramush Haradinaj, former Prime Minister of Kosovo; Radovan Karadžić, former President of the Republika Srpska; Ratko Mladić, former Commander of the Bosnian Serb Army; and Ante Gotovina (acquitted), former General of the Croatian Army.

The very first hearing at the ICTY was a referral request in the Tadić case on 8 November 1994. Croat Serb General and former President of the Republic of Serbian Krajina Goran Hadžić was the last fugitive wanted by the Tribunal, arrested on 20 July 2011. An additional 23 individuals were the subject of contempt proceedings.

In 2004, the ICTY published a list of five accomplishments "in justice and law".

The United Nations Security Council passed resolutions 1503 in August 2003 and 1534 in March 2004, both of which called for the completion of all cases at the ICTY and its sister tribunal, the International Criminal Tribunal for Rwanda (ICTR), by 2010.

In December 2010, the Security Council adopted Resolution 1966, which established the International Residual Mechanism for Criminal Tribunals (IRMCT), a body intended to gradually assume residual functions from both the ICTY and the ICTR as they wound down their mandates. Resolution 1966 called upon the Tribunal to finish its work by 31 December 2014 and to prepare for its closure and the transfer of its responsibilities.

In a Completion Strategy Report issued in May 2011, the ICTY indicated that it aimed to complete all trials by the end of 2012 and all appeals by 2015, with the exception of Radovan Karadžić, whose trial was expected to end in 2014, and Ratko Mladić and Goran Hadžić, who were still at large at that time and were not arrested until later that year.

The IRMCT's ICTY branch began functioning on 1 July 2013.
Per the Transitional Arrangements adopted by the UN Security Council, the ICTY was to conduct and complete all outstanding first-instance trials, including those of Karadžić, Mladić and Hadžić, as well as all appeal proceedings for which the notice of appeal against the judgement or sentence was filed before 1 July 2013; the IRMCT would handle any appeals for which notice was filed after that date.

The final ICTY trial to be completed in the first instance was that of Ratko Mladić, who was convicted on 22 November 2017. The final case considered by the ICTY was an appeal proceeding encompassing six individuals, whose sentences were upheld on 29 November 2017.

While operating, the Tribunal employed around 900 staff. Its organisational components were Chambers, the Registry and the Office of the Prosecutor (OTP).

The Prosecutor was responsible for investigating crimes, gathering evidence and conducting prosecutions, and was head of the Office of the Prosecutor. The Prosecutor was appointed by the UN Security Council upon nomination by the UN Secretary-General. The last Prosecutor was Serge Brammertz. Previous Prosecutors were Ramón Escovar Salom of Venezuela (1993–1994), who never took up the office, Richard Goldstone of South Africa (1994–1996), Louise Arbour of Canada (1996–1999), and Carla Del Ponte of Switzerland (1999–2007). Goldstone, Arbour and Del Ponte also simultaneously served as Prosecutor of the International Criminal Tribunal for Rwanda until 2003. Graham Blewitt of Australia served as Deputy Prosecutor from 1994 until 2004; David Tolbert, the President of the International Center for Transitional Justice, was appointed Deputy Prosecutor in 2004.

Chambers encompassed the judges and their aides. The Tribunal operated three Trial Chambers and one Appeals Chamber. The President of the Tribunal was also the presiding Judge of the Appeals Chamber.
At the time of the court's dissolution, seven permanent judges and one ad hoc judge served on the Tribunal. A total of 86 judges were appointed to the Tribunal from 52 United Nations member states: 51 permanent judges, 36 ad litem judges, and one ad hoc judge. (One judge served as both a permanent and an ad litem judge, and another as both a permanent and an ad hoc judge.)

UN member and observer states could each submit up to two nominees of different nationalities to the UN Secretary-General, who submitted the list to the UN Security Council. The Security Council selected from 28 to 42 nominees and submitted them to the UN General Assembly, which then elected 14 judges from that list. Judges served for four years and were eligible for re-election. The UN Secretary-General appointed replacements in case of vacancy for the remainder of the term of office concerned.

On 21 October 2015, Judge Carmel Agius of Malta was elected President of the ICTY and Liu Daqun of China Vice-President; they assumed their positions on 17 November 2015. Agius's predecessors as President were Antonio Cassese of Italy (1993–1997), Gabrielle Kirk McDonald of the United States (1997–1999), Claude Jorda of France (1999–2002), Theodor Meron of the United States (2002–2005), Fausto Pocar of Italy (2005–2008), Patrick Robinson of Jamaica (2008–2011), and Theodor Meron again (2011–2015).

The Registry was responsible for the administration of the Tribunal; its activities included keeping court records, translating court documents, transporting and accommodating those who appeared to testify, operating the Public Information Section, and general duties such as payroll administration, personnel management and procurement. It was also responsible for the Detention Unit, for indictees held during their trial, and the Legal Aid program, for indictees who could not pay for their own defence.
It was headed by the Registrar, a position occupied over the years by Theo van Boven of the Netherlands (February 1994 to December 1994), Dorothée de Sampayo Garrido-Nijgh of the Netherlands (1995–2000), Hans Holthuis of the Netherlands (2001–2009), and John Hocking of Australia (May 2009 to December 2017).

Defendants on trial, and those denied provisional release, were detained at the United Nations Detention Unit on the premises of the Penitentiary Institution Haaglanden, location Scheveningen, in Belgisch Park, a suburb of The Hague some 3 km by road from the courthouse. The indicted were housed in private cells which had a toilet, shower, radio, satellite TV, a personal computer (without internet access) and other amenities. They were allowed to phone family and friends daily and could have conjugal visits. There was also a library, a gym and various rooms used for religious observances, and the inmates were allowed to cook for themselves. All of the inmates mixed freely and were not segregated by nationality. Because the cells were more akin to a university residence than to a jail, some derisively referred to the ICTY detention unit as the "Hague Hilton". The reason for this relative comfort was that the first president of the court wanted to emphasise that the indictees were innocent until proven guilty.

The court was also subject to a number of criticisms.
[ { "paragraph_id": 0, "text": "The International Criminal Tribunal for the former Yugoslavia (ICTY) was a body of the United Nations that was established to prosecute the war crimes that had been committed during the Yugoslav Wars and to try their perpetrators. The tribunal was an ad hoc court located in The Hague, Netherlands.", "title": "" }, { "paragraph_id": 1, "text": "It was established by Resolution 827 of the United Nations Security Council, which was passed on 25 May 1993. It had jurisdiction over four clusters of crimes committed on the territory of the former Yugoslavia since 1991: grave breaches of the Geneva Conventions, violations of the laws or customs of war, genocide, and crimes against humanity. The maximum sentence that it could impose was life imprisonment. Various countries signed agreements with the United Nations to carry out custodial sentences.", "title": "" }, { "paragraph_id": 2, "text": "A total of 161 persons were indicted; the final indictments were issued in December 2004, the last of which were confirmed and unsealed in the spring of 2005. The final fugitive, Goran Hadžić, was arrested on 20 July 2011. 
The final judgment was issued on 29 November 2017 and the institution formally ceased to exist on 31 December 2017.", "title": "" }, { "paragraph_id": 3, "text": "Residual functions of the ICTY, including the oversight of sentences and consideration of any appeal proceedings initiated since 1 July 2013, are under the jurisdiction of a successor body, the International Residual Mechanism for Criminal Tribunals (IRMCT).", "title": "" }, { "paragraph_id": 4, "text": "United Nations Security Council Resolution 808 of 22 February 1993 decided that an \"international tribunal shall be established for the prosecution of persons responsible for serious violations of international humanitarian law committed in the territory of the former Yugoslavia since 1991\", and called on the Secretary-General to \"submit for consideration by the Council ... a report on all aspects of this matter, including specific proposals and where appropriate options ... taking into account suggestions put forward in this regard by Member States\".", "title": "History" }, { "paragraph_id": 5, "text": "The Court was originally proposed by German Foreign Minister Klaus Kinkel.", "title": "History" }, { "paragraph_id": 6, "text": "Resolution 827 of 25 May 1993 approved the S/25704 report of the Secretary-General and adopted the Statute of the International Tribunal annexed to it, formally creating the ICTY. It was to have jurisdiction over four clusters of crimes committed on the territory of the former SFR Yugoslavia since 1991:", "title": "History" }, { "paragraph_id": 7, "text": "The maximum sentence the ICTY could impose for these crimes was life imprisonment.", "title": "History" }, { "paragraph_id": 8, "text": "In 1993 the internal infrastructure of the ICTY was built. 
17 states had signed an agreement with the ICTY to carry out custodial sentences.", "title": "History" }, { "paragraph_id": 9, "text": "1993–1994: In the first year of its existence, the Tribunal laid the foundations for its existence as a judicial organ. It established the legal framework for its operations by adopting the rules of procedure and evidence, as well as its rules of detention and directive for the assignment of defence counsel. Together, these rules established a legal aid system for the Tribunal. As the ICTY was a part of the United Nations and was the first international court for criminal justice, the development of a juridical infrastructure was considered quite a challenge. However, after the first year, the first ICTY judges had drafted and adopted all the rules for court proceedings.", "title": "History" }, { "paragraph_id": 10, "text": "1994–1995: The ICTY established its offices within the Aegon Insurance Building in The Hague (which was, at the time, still partially in use by Aegon) and detention facilities in Scheveningen in The Hague (the Netherlands). The ICTY hired many staff members and by July 1994, the Office of the Prosecutor had sufficient staff to begin field investigations. By November 1994, the first indictments were presented to the Court and confirmed, and in 1995, the staff numbered over 200 persons from all over the world.", "title": "History" }, { "paragraph_id": 11, "text": "In 1994 the first indictment was issued against the Bosnian-Serb concentration camp commander Dragan Nikolić. This was followed on 13 February 1995 by two indictments comprising 21 individuals which were issued against a group of 21 Bosnian-Serbs charged with committing atrocities against Muslim and Croat civilian prisoners. While the war in the former Yugoslavia was still raging, the ICTY prosecutors showed that an international court was viable. 
However, no accused was arrested.", "title": "History" }, { "paragraph_id": 12, "text": "The court confirmed eight indictments against 46 individuals and issued arrest warrants. Bosnian Serb indictee Duško Tadić became the subject of the Tribunal's first trial. Tadić was arrested by German police in Munich in 1994 for his alleged actions in the Prijedor region in Bosnia-Herzegovina (especially his actions in the Omarska, Trnopolje and Keraterm detention camps). He made his first appearance before the ICTY Trial Chamber on 26 April 1995, and pleaded not guilty to all of the charges in the indictment.", "title": "History" }, { "paragraph_id": 13, "text": "1995–1996: Between June 1995 and June 1996, 10 public indictments had been confirmed against a total of 33 individuals. Six of the newly indicted persons were transferred in the Tribunal's detention unit. In addition to Duško Tadic, by June 1996 the tribunal had Tihomir Blaškić, Dražen Erdemović, Zejnil Delalić, Zdravko Mucić, Esad Landžo and Hazim Delić in custody. Erdemović became the first person to enter a guilty plea before the tribunal's court. Between 1995 and 1996, the ICTY dealt with miscellaneous cases involving several detainees, which never reached the trial stage.", "title": "History" }, { "paragraph_id": 14, "text": "The Tribunal indicted 161 individuals between 1997 and 2004 and completed proceedings with them as follows:", "title": "History" }, { "paragraph_id": 15, "text": "The indictees ranged from common soldiers to generals and police commanders all the way to prime ministers. Slobodan Milošević was the first sitting head of state indicted for war crimes. 
Other \"high level\" indictees included Milan Babić, former President of the Republika Srpska Krajina; Ramush Haradinaj, former Prime Minister of Kosovo; Radovan Karadžić, former President of the Republika Srpska; Ratko Mladić, former Commander of the Bosnian Serb Army; and Ante Gotovina (acquitted), former General of the Croatian Army.", "title": "History" }, { "paragraph_id": 16, "text": "The very first hearing at the ICTY was a referral request in the Tadić case on 8 November 1994. Croat Serb General and former President of the Republic of Serbian Krajina Goran Hadžić was the last fugitive wanted by the Tribunal to be arrested on 20 July 2011.", "title": "History" }, { "paragraph_id": 17, "text": "An additional 23 individuals have been the subject of contempt proceedings.", "title": "History" }, { "paragraph_id": 18, "text": "In 2004, the ICTY published a list of five accomplishments \"in justice and law\":", "title": "History" }, { "paragraph_id": 19, "text": "The United Nations Security Council passed resolutions 1503 in August 2003 and 1534 in March 2004, which both called for the completion of all cases at both the ICTY and its sister tribunal, the International Criminal Tribunal for Rwanda (ICTR) by 2010.", "title": "History" }, { "paragraph_id": 20, "text": "In December 2010, the Security Council adopted Resolution 1966, which established the International Residual Mechanism for Criminal Tribunals (IRMCT), a body intended to gradually assume residual functions from both the ICTY and the ICTR as they wound down their mandate. 
Resolution 1966 called upon the Tribunal to finish its work by 31 December 2014 to prepare for its closure and the transfer of its responsibilities.", "title": "History" }, { "paragraph_id": 21, "text": "In a Completion Strategy Report issued in May 2011, the ICTY indicated that it aimed to complete all trials by the end of 2012 and complete all appeals by 2015, with the exception of Radovan Karadžić, whose trial was expected to end in 2014, and Ratko Mladić and Goran Hadžić, who were still at large at that time and were not arrested until later that year.", "title": "History" }, { "paragraph_id": 22, "text": "The IRMCT's ICTY branch began functioning on 1 July 2013. Per the Transitional Arrangements adopted by the UN Security Council, the ICTY was to conduct and complete all outstanding first-instance trials, including those of Karadžić, Mladić and Hadžić. The ICTY would also conduct and complete all appeal proceedings for which the notice of appeal against the judgement or sentence was filed before 1 July 2013. The IRMCT would handle any appeals for which notice was filed after that date.", "title": "History" }, { "paragraph_id": 23, "text": "The final ICTY trial to be completed in the first instance was that of Ratko Mladić, who was convicted on 22 November 2017. The final case to be considered by the ICTY was an appeal proceeding encompassing six individuals, whose sentences were upheld on 29 November 2017.", "title": "History" }, { "paragraph_id": 24, "text": "While operating, the Tribunal employed around 900 staff. Its organisational components were Chambers, Registry and the Office of the Prosecutor (OTP).", "title": "Organization" }, { "paragraph_id": 25, "text": "The Prosecutor was responsible for investigating crimes, gathering evidence and conducting prosecutions, and was head of the Office of the Prosecutor (OTP).
The Prosecutor was appointed by the UN Security Council upon nomination by the UN Secretary-General.", "title": "Organization" }, { "paragraph_id": 26, "text": "The last prosecutor was Serge Brammertz. Previous Prosecutors were Ramón Escovar Salom of Venezuela (1993–1994), who never took up the office; Richard Goldstone of South Africa (1994–1996); Louise Arbour of Canada (1996–1999); and Carla Del Ponte of Switzerland (1999–2007). Richard Goldstone, Louise Arbour and Carla Del Ponte also simultaneously served as the Prosecutor of the International Criminal Tribunal for Rwanda until 2003. Graham Blewitt of Australia served as the Deputy Prosecutor from 1994 until 2004. David Tolbert, the President of the International Center for Transitional Justice, was also appointed Deputy Prosecutor of the ICTY in 2004.", "title": "Organization" }, { "paragraph_id": 27, "text": "Chambers encompassed the judges and their aides. The Tribunal operated three Trial Chambers and one Appeals Chamber. The President of the Tribunal was also the presiding Judge of the Appeals Chamber.", "title": "Organization" }, { "paragraph_id": 28, "text": "At the time of the court's dissolution, there were seven permanent judges and one ad hoc judge who served on the Tribunal. A total of 86 judges were appointed to the Tribunal from 52 United Nations member states. Of those judges, 51 were permanent judges, 36 were ad litem judges, and one was an ad hoc judge. Note that one judge served as both a permanent and ad litem judge, and another served as both a permanent and ad hoc judge.", "title": "Organization" }, { "paragraph_id": 29, "text": "UN member and observer states could each submit up to two nominees of different nationalities to the UN Secretary-General. The UN Secretary-General submitted this list to the UN Security Council, which selected from 28 to 42 nominees and submitted these nominees to the UN General Assembly.
The UN General Assembly then elected 14 judges from that list. Judges served for four years and were eligible for re-election. The UN Secretary-General appointed replacements in case of vacancy for the remainder of the term of office concerned.", "title": "Organization" }, { "paragraph_id": 30, "text": "On 21 October 2015, Judge Carmel Agius of Malta was elected President of the ICTY and Liu Daqun of China was elected Vice-President; they assumed their positions on 17 November 2015. Agius's predecessors as President were Antonio Cassese of Italy (1993–1997), Gabrielle Kirk McDonald of the United States (1997–1999), Claude Jorda of France (1999–2002), Theodor Meron of the United States (2002–2005), Fausto Pocar of Italy (2005–2008), Patrick Robinson of Jamaica (2008–2011), and Theodor Meron (2011–2015).", "title": "Organization" }, { "paragraph_id": 31, "text": "The Registry was responsible for handling the administration of the Tribunal; activities included keeping court records, translating court documents, transporting and accommodating those who appeared to testify, operating the Public Information Section, and such general duties as payroll administration, personnel management and procurement. It was also responsible for the Detention Unit for indictees being held during their trial and the Legal Aid program for indictees who could not pay for their own defence.
It was headed by the Registrar, a position occupied over the years by Theo van Boven of the Netherlands (February 1994 to December 1994), Dorothée de Sampayo Garrido-Nijgh of the Netherlands (1995–2000), Hans Holthuis of the Netherlands (2001–2009), and John Hocking of Australia (May 2009 to December 2017).", "title": "Organization" }, { "paragraph_id": 32, "text": "Those defendants on trial and those who were denied a provisional release were detained at the United Nations Detention Unit on the premises of the Penitentiary Institution Haaglanden, location Scheveningen in Belgisch Park, a suburb of The Hague, located some 3 km by road from the courthouse. The indicted were housed in private cells which had a toilet, shower, radio, satellite TV, personal computer (without internet access) and other luxuries. They were allowed to phone family and friends daily and could have conjugal visits. There was also a library, a gym and various rooms used for religious observances. The inmates were allowed to cook for themselves. All of the inmates mixed freely and were not segregated on the basis of nationality. As the cells were more akin to a university residence than to a jail, some derisively referred to the ICTY as the \"Hague Hilton\". The reason for this luxury relative to other prisons was that the first president of the court wanted to emphasise that the indictees were innocent until proven guilty.", "title": "Organization" }, { "paragraph_id": 33, "text": "Criticisms of the court include:", "title": "Controversies" } ]
The International Criminal Tribunal for the former Yugoslavia (ICTY) was a body of the United Nations that was established to prosecute the war crimes that had been committed during the Yugoslav Wars and to try their perpetrators. The tribunal was an ad hoc court located in The Hague, Netherlands. It was established by Resolution 827 of the United Nations Security Council, which was passed on 25 May 1993. It had jurisdiction over four clusters of crimes committed on the territory of the former Yugoslavia since 1991: grave breaches of the Geneva Conventions, violations of the laws or customs of war, genocide, and crimes against humanity. The maximum sentence that it could impose was life imprisonment. Various countries signed agreements with the United Nations to carry out custodial sentences. A total of 161 persons were indicted; the final indictments were issued in December 2004, the last of which were confirmed and unsealed in the spring of 2005. The final fugitive, Goran Hadžić, was arrested on 20 July 2011. The final judgment was issued on 29 November 2017 and the institution formally ceased to exist on 31 December 2017. Residual functions of the ICTY, including the oversight of sentences and consideration of any appeal proceedings initiated since 1 July 2013, are under the jurisdiction of a successor body, the International Residual Mechanism for Criminal Tribunals (IRMCT).
2001-11-24T15:35:10Z
2023-12-21T12:57:35Z
[ "Template:Dts", "Template:Criticism section", "Template:Who", "Template:Cn", "Template:Cite book", "Template:Refbegin", "Template:Infobox high court", "Template:Efn", "Template:Flag", "Template:Portal", "Template:Notelist", "Template:Cite web", "Template:Cite journal", "Template:Refend", "Template:Use dmy dates", "Template:Jus in bello", "Template:Cite news", "Template:Commons category", "Template:International criminal law", "Template:Short description", "Template:Main", "Template:Sortname", "Template:Clarify", "Template:Reflist", "Template:Official website", "Template:Authority control" ]
https://en.wikipedia.org/wiki/International_Criminal_Tribunal_for_the_former_Yugoslavia
15,275
ISO 216
ISO 216 is an international standard for paper sizes, used around the world except in North America and parts of Latin America. The standard defines the "A", "B" and "C" series of paper sizes, including A4, the most commonly available paper size worldwide. Two supplementary standards, ISO 217 and ISO 269, define related paper sizes; the ISO 269 "C" series is commonly listed alongside the A and B sizes. All ISO 216, ISO 217 and ISO 269 paper sizes (except some envelopes) have the same aspect ratio, √2:1, within rounding to millimetres. This ratio has the unique property that when cut or folded in half widthways, the halves also have the same aspect ratio. Each ISO paper size is one half of the area of the next larger size in the same series. The oldest known mention of the advantages of basing a paper size on an aspect ratio of √2 is found in a letter written on 25 October 1786 by the German scientist Georg Christoph Lichtenberg to Johann Beckmann. The formats that became ISO paper sizes A2, A3, B3, B4, and B5 were developed in France. They were listed in a 1798 law on taxation of publications that was based in part on page sizes. In 1911, over a hundred years after the “Loi sur le timbre”, Wilhelm Ostwald proposed a Weltformat (world format) for paper sizes based on the ratio 1:√2. Searching on behalf of the association Die Brücke for a standard system of paper formats on a scientific basis, as a replacement for the vast variety of paper formats that had been used before, he aimed to make paper stocking and document reproduction cheaper and more efficient. Ostwald referred to the argument advanced by Lichtenberg's 1786 letter, and linked the proposal to the metric system by using 1 centimetre as the width of the base format. W. Porstmann argued in a long article published in 1918 that a firm basis for a system of paper formats, which deals with surfaces, could not be the length, but the surface, i.e.
linking the system of paper formats to the metric system of measures by the square metre, using the two formulae x : y :: 1 : √2 and x × y = 1 m². Porstmann also argued that formats for containers of paper like envelopes should be 10% larger than the paper format itself. In 1921, after a long discussion and another intervention by W. Porstmann, the Normenausschuß der deutschen Industrie (NADI, "Standardisation Committee of German Industry", today Deutsches Institut für Normung, or DIN for short) published German standard DI Norm 476, specifying four series of paper formats with ratio 1:√2, with series A as the always-preferred formats and the basis for the other series. All measures are rounded to the nearest millimetre. A0 has a surface area of 1 square metre up to a rounding error: with a width of 841 mm and a height of 1189 mm, its actual area is 0.999949 m². A4 was recommended as the standard paper size for business, administrative and government correspondence, and A6 for postcards. Series B is based on B0 with a width of 1 metre, C0 is 917 mm × 1297 mm, and D0 is 771 mm × 1090 mm. Series C is the basis for envelope formats. The DIN paper-format concept was soon introduced as a national standard in many other countries, for example, Belgium (1924), Netherlands (1925), Norway (1926), Switzerland (1929), Sweden (1930), Soviet Union (1934), Hungary (1938), Italy (1939), Finland (1942), Uruguay (1942), Argentina (1943), Brazil (1943), Spain (1947), Austria (1948), Romania (1949), Japan (1951), Denmark (1953), Czechoslovakia (1953), Israel (1954), Portugal (1954), Yugoslavia (1956), India (1957), Poland (1957), United Kingdom (1959), Venezuela (1962), New Zealand (1963), Iceland (1964), Mexico (1965), South Africa (1966), France (1967), Peru (1967), Turkey (1967), Chile (1968), Greece (1970), Zimbabwe (1970), Singapore (1970), Bangladesh (1972), Thailand (1973), Barbados (1973), Australia (1974), Ecuador (1974), Colombia (1975) and Kuwait (1975).
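The construction just specified (A0 rounded to whole millimetres, each following size obtained by halving the longer side and rounding down) can be sketched in a few lines of Python; the helper name is ours, not part of the standard:

```python
def a_series(n_max=10):
    """Dimensions (short, long) in mm of A0..A(n_max).

    A0 is the 1:sqrt(2) rectangle of area 1 m^2, rounded to whole
    millimetres; each following size halves the longer side, rounding down.
    """
    short = round(1000 * 2 ** -0.25)  # 841 mm
    long_ = round(1000 * 2 ** 0.25)   # 1189 mm
    sizes = {0: (short, long_)}
    for n in range(1, n_max + 1):
        # The old short side becomes the new long side.
        short, long_ = long_ // 2, short
        sizes[n] = (short, long_)
    return sizes

print(a_series()[4])  # (210, 297): A4
print(a_series()[6])  # (105, 148): A6, the recommended postcard size
```

Rounding down at each halving, rather than recomputing from the exact area, is what keeps the long side of each size identical to the short side of the next larger one.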
It finally became both an international standard (ISO 216) and the official United Nations document format in 1975, and it is today used in almost all countries in the world, with the exception of the United States, Canada, Mexico, Peru, Colombia, and the Dominican Republic. In 1977, a large German car manufacturer performed a study of the paper formats found in its incoming mail and concluded that out of 148 examined countries, 88 already used the A series formats. The main advantage of this system is its scaling. Rectangular paper with an aspect ratio of √2 has the unique property that, when cut or folded in half midway between its longer sides, each half has the same √2 aspect ratio as the whole sheet before it was divided. Equivalently, if one lays two same-sized sheets of paper with an aspect ratio of √2 side by side along their longer side, they form a larger rectangle with the aspect ratio of √2 and double the area of each individual sheet. The ISO system of paper sizes exploits these properties of the √2 aspect ratio. In each series of sizes (for example, series A), the largest size is numbered 0 (for example, A0), and each successive size (for example, A1, A2, etc.) has half the area of the preceding sheet and can be cut by halving the length of the preceding size sheet. The new measurement is rounded down to the nearest millimetre. A folded brochure can be made by using a sheet of the next larger size (for example, an A4 sheet is folded in half to make a brochure with size A5 pages). An office photocopier or printer can be designed to reduce a page from A4 to A5 or to enlarge a page from A4 to A3. Similarly, two sheets of A4 can be scaled down to fit one A4 sheet without excess empty paper. This system also simplifies calculating the weight of paper. Under ISO 536, paper's grammage is defined as a sheet's mass in grams (g) per area in square metres (unit symbol g/m²; the nonstandard abbreviation "gsm" is also used).
The weight of a sheet of any size can then be derived by simple arithmetic. A standard A4 sheet made from 80 g/m² paper weighs 5 g, as it is 1/16 (four halvings, ignoring rounding) of an A0 page. Thus the weight, and the associated postage rate, can be easily approximated by counting the number of sheets used. ISO 216 and its related standards were first published between 1975 and 1995: Paper in the A series format has an aspect ratio of √2 (≈ 1.414). A0 is defined so that it has an area of 1 m² before rounding to the nearest millimetre. Successive paper sizes in the series (A1, A2, A3, etc.) are defined by halving the area of the preceding paper size and rounding down, so that the long side of A(n + 1) is the same length as the short side of An. Hence, each next size is nearly exactly half of the prior size. So, an A1 page can fit two A2 pages inside the same area. The most used of this series is the size A4, which is 210 mm × 297 mm (8.27 in × 11.7 in) and thus almost exactly 1⁄16 square metre (0.0625 m²; 96.875 sq in) in area. For comparison, the letter paper size commonly used in North America (8+1⁄2 in × 11 in; 216 mm × 279 mm) is about 6 mm (0.24 in) wider and 18 mm (0.71 in) shorter than A4. The size of A5 paper is half that of A4: 148 mm × 210 mm (5.8 in × 8.3 in). The geometric rationale for using the square root of 2 is to maintain the aspect ratio of each subsequent rectangle after cutting or folding an A-series sheet in half, perpendicular to the longer side. Given a rectangle with a longer side x and a shorter side y, halving it gives a rectangle with longer side y and shorter side x/2. Requiring that the aspect ratio be preserved means x/y = y/(x/2), which reduces to x² = 2y², i.e. x/y = √2; in other words, an aspect ratio of 1:√2. The B series is defined in the standard as follows: "A subsidiary series of sizes is obtained by placing the geometrical means between adjacent sizes of the A series in sequence."
The use of the geometric mean makes each step in size: B0, A0, B1, A1, B2 ... smaller than the previous one by the same factor, ⁴√2 ≈ 1.19. As with the A series, the lengths of the B series have the ratio √2, and folding one in half (and rounding down to the nearest millimetre) gives the next in the series. The shorter side of B0 is exactly 1 metre. There is also an incompatible Japanese B series which the JIS defines to have 1.5 times the area of the corresponding JIS A series (which is identical to the ISO A series). Thus, the lengths of JIS B series paper are √1.5 ≈ 1.22 times those of A-series paper. By comparison, the lengths of ISO B series paper are ⁴√2 ≈ 1.19 times those of A-series paper. The C series formats are geometric means between the B series and A series formats with the same number (e.g. C2 is the geometric mean between B2 and A2). The width to height ratio of C series formats is √2 as in the A and B series. A, B, and C series of paper fit together as part of a geometric progression, with ratio of successive side lengths of ⁸√2, though there is no size half-way between Bn and A(n − 1): A4, C4, B4, "D4", A3, ...; there is such a D-series in the Swedish extensions to the system. The lengths of ISO C series paper are therefore ⁸√2 ≈ 1.09 times those of A-series paper. The C series formats are used mainly for envelopes. An unfolded A4 page will fit into a C4 envelope. Due to the same width to height ratio, if an A4 page is folded in half so that it is A5 in size, it will fit into a C5 envelope (which will be the same size as a C4 envelope folded in half). The tolerances specified in the standard are: These are related to comparison between series A, B and C. The ISO 216 formats are organized around the ratio 1:√2; two sheets next to each other together have the same ratio, sideways.
In scaled photocopying, for example, two A4 sheets reduced to A5 size fit exactly onto one A4 sheet, and an A4 sheet in magnified size onto an A3 sheet; in each case, there is neither waste nor want. The principal countries not generally using the ISO paper sizes are the United States and Canada, which use North American paper sizes. Although they have also officially adopted the ISO 216 paper format, Mexico, Panama, Peru, Colombia, the Philippines, and Chile mostly use U.S. paper sizes. Rectangular sheets of paper with the ratio 1:√2 are popular in paper folding, such as origami, where they are sometimes called "A4 rectangles" or "silver rectangles". In other contexts, the term "silver rectangle" can also refer to a rectangle in the proportion 1:(1 + √2), known as the silver ratio. An adjunct to the ISO paper sizes, particularly the A series, is the set of technical drawing line widths specified in ISO 128. For example, line type A ("Continuous - thick", used for "visible outlines") has a standard thickness of 0.7 mm on an A0-sized sheet, 0.5 mm on an A1 sheet, and 0.35 mm on A2, A3, or A4. The matching technical pen widths are 0.13, 0.18, 0.25, 0.35, 0.5, 0.7, 1.0, 1.4, and 2.0 mm, as specified in ISO 9175-1. Colour codes are assigned to each size to facilitate easy recognition by the drafter. These sizes again increase by a factor of √2, so that particular pens can be used on particular sizes of paper, and then the next smaller or larger size can be used to continue the drawing after it has been reduced or enlarged, respectively. The earlier DIN 6775 standard upon which ISO 9175-1 is based also specified a term and symbol for easy identification of pens and drawing templates compatible with the standard, called Micronorm, which may still be found on some technical drafting equipment.
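The series arithmetic described above (A-series areas of 1/2ⁿ m², B and C sizes as geometric means, and the grammage rule) can be checked numerically; a short sketch, with helper names that are ours rather than the standard's:

```python
import math

def a_exact_mm(n):
    """Unrounded sides (short, long) in mm of A(n): area 2**-n m^2, ratio sqrt(2)."""
    return (1000 * 2 ** (-0.25 - n / 2), 1000 * 2 ** (0.25 - n / 2))

def geo_mean(p, q):
    """Geometric mean of two paper sizes, taken side by side."""
    return tuple(math.sqrt(x * y) for x, y in zip(p, q))

def b_exact_mm(n):
    """B(n) is the geometric mean of A(n) and the next larger size A(n-1)."""
    return geo_mean(a_exact_mm(n), a_exact_mm(n - 1))

def c_exact_mm(n):
    """C(n) is the geometric mean of A(n) and B(n)."""
    return geo_mean(a_exact_mm(n), b_exact_mm(n))

def sheet_weight_g(n, grammage=80.0):
    """Weight of one A(n) sheet: grammage is per m^2 and A(n) is 1/2**n m^2."""
    return grammage / 2 ** n

print([round(s) for s in a_exact_mm(4)])  # [210, 297]
print([round(s) for s in b_exact_mm(0)])  # [1000, 1414]: B0's short side is 1 m
print([round(s) for s in c_exact_mm(4)])  # [229, 324]: C4 holds an unfolded A4
print(sheet_weight_g(4))                  # 5.0 g per 80 g/m^2 A4 sheet
```

Note that rounding these exact geometric means does not always reproduce the tabulated B sizes, because the standard rounds down when halving (B3's shorter side rounds to 354 from the exact value but is 353 in the tables); the A4, B0 and C4 values, however, come out as quoted in the text.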
[ { "paragraph_id": 0, "text": "ISO 216 is an international standard for paper sizes, used around the world except in North America and parts of Latin America. The standard defines the \"A\", \"B\" and \"C\" series of paper sizes, including A4, the most commonly available paper size worldwide. Two supplementary standards, ISO 217 and ISO 269, define related paper sizes; the ISO 269 \"C\" series is commonly listed alongside the A and B sizes.", "title": "" }, { "paragraph_id": 1, "text": "All ISO 216, ISO 217 and ISO 269 paper sizes (except some envelopes) have the same aspect ratio, √2:1, within rounding to millimetres. This ratio has the unique property that when cut or folded in half widthways, the halves also have the same aspect ratio. Each ISO paper size is one half of the area of the next larger size in the same series.", "title": "" }, { "paragraph_id": 2, "text": "The oldest known mention of the advantages of basing a paper size on an aspect ratio of √2 is found in a letter written on 25 October 1786 by the German scientist Georg Christoph Lichtenberg to Johann Beckmann.", "title": "History" }, { "paragraph_id": 3, "text": "The formats that became ISO paper sizes A2, A3, B3, B4, and B5 were developed in France. They were listed in a 1798 law on taxation of publications that was based in part on page sizes.", "title": "History" }, { "paragraph_id": 4, "text": "In 1911, over a hundred years after the “Loi sur le timbre”, Wilhelm Ostwald proposed a Weltformat (world format) for paper sizes based on the ratio 1:√2. Searching on behalf of the association Die Brücke for a standard system of paper formats on a scientific basis, as a replacement for the vast variety of paper formats that had been used before, he aimed to make paper stocking and document reproduction cheaper and more efficient. Ostwald referred to the argument advanced by Lichtenberg's 1786 letter, and linked the proposal to the metric system by using 1 centimetre as the width of the base format. W.
Porstmann argued in a long article published in 1918 that a firm basis for a system of paper formats, which deals with surfaces, could not be the length, but the surface, i.e. linking the system of paper formats to the metric system of measures by the square metre, using the two formulae x : y :: 1 : √2 and x × y = 1 m². Porstmann also argued that formats for containers of paper like envelopes should be 10% larger than the paper format itself.", "title": "History" }, { "paragraph_id": 5, "text": "In 1921, after a long discussion and another intervention by W. Porstmann, the Normenausschuß der deutschen Industrie (NADI, \"Standardisation Committee of German Industry\", today Deutsches Institut für Normung, or DIN for short) published German standard DI Norm 476, specifying four series of paper formats with ratio 1:√2, with series A as the always-preferred formats and the basis for the other series. All measures are rounded to the nearest millimetre. A0 has a surface area of 1 square metre up to a rounding error: with a width of 841 mm and a height of 1189 mm, its actual area is 0.999949 m². A4 was recommended as the standard paper size for business, administrative and government correspondence, and A6 for postcards. Series B is based on B0 with a width of 1 metre, C0 is 917 mm × 1297 mm, and D0 is 771 mm × 1090 mm.
Series C is the basis for envelope formats.", "title": "History" }, { "paragraph_id": 6, "text": "The DIN paper-format concept was soon introduced as a national standard in many other countries, for example, Belgium (1924), Netherlands (1925), Norway (1926), Switzerland (1929), Sweden (1930), Soviet Union (1934), Hungary (1938), Italy (1939), Finland (1942), Uruguay (1942), Argentina (1943), Brazil (1943), Spain (1947), Austria (1948), Romania (1949), Japan (1951), Denmark (1953), Czechoslovakia (1953), Israel (1954), Portugal (1954), Yugoslavia (1956), India (1957), Poland (1957), United Kingdom (1959), Venezuela (1962), New Zealand (1963), Iceland (1964), Mexico (1965), South Africa (1966), France (1967), Peru (1967), Turkey (1967), Chile (1968), Greece (1970), Zimbabwe (1970), Singapore (1970), Bangladesh (1972), Thailand (1973), Barbados (1973), Australia (1974), Ecuador (1974), Colombia (1975) and Kuwait (1975).", "title": "History" }, { "paragraph_id": 7, "text": "It finally became both an international standard (ISO 216) and the official United Nations document format in 1975, and it is today used in almost all countries in the world, with the exception of the United States, Canada, Mexico, Peru, Colombia, and the Dominican Republic.", "title": "History" }, { "paragraph_id": 8, "text": "In 1977, a large German car manufacturer performed a study of the paper formats found in its incoming mail and concluded that out of 148 examined countries, 88 already used the A series formats.", "title": "History" }, { "paragraph_id": 9, "text": "The main advantage of this system is its scaling. Rectangular paper with an aspect ratio of √2 has the unique property that, when cut or folded in half midway between its longer sides, each half has the same √2 aspect ratio as the whole sheet before it was divided.
Equivalently, if one lays two same-sized sheets of paper with an aspect ratio of √2 side by side along their longer side, they form a larger rectangle with the aspect ratio of √2 and double the area of each individual sheet.", "title": "Advantages" }, { "paragraph_id": 10, "text": "The ISO system of paper sizes exploits these properties of the √2 aspect ratio. In each series of sizes (for example, series A), the largest size is numbered 0 (for example, A0), and each successive size (for example, A1, A2, etc.) has half the area of the preceding sheet and can be cut by halving the length of the preceding size sheet. The new measurement is rounded down to the nearest millimetre. A folded brochure can be made by using a sheet of the next larger size (for example, an A4 sheet is folded in half to make a brochure with size A5 pages). An office photocopier or printer can be designed to reduce a page from A4 to A5 or to enlarge a page from A4 to A3. Similarly, two sheets of A4 can be scaled down to fit one A4 sheet without excess empty paper.", "title": "Advantages" }, { "paragraph_id": 11, "text": "This system also simplifies calculating the weight of paper. Under ISO 536, paper's grammage is defined as a sheet's mass in grams (g) per area in square metres (unit symbol g/m²; the nonstandard abbreviation \"gsm\" is also used). The weight of a sheet of any size can then be derived by simple arithmetic. A standard A4 sheet made from 80 g/m² paper weighs 5 g, as it is 1/16 (four halvings, ignoring rounding) of an A0 page. Thus the weight, and the associated postage rate, can be easily approximated by counting the number of sheets used.", "title": "Advantages" }, { "paragraph_id": 12, "text": "ISO 216 and its related standards were first published between 1975 and 1995:", "title": "Advantages" }, { "paragraph_id": 13, "text": "Paper in the A series format has an aspect ratio of √2 (≈ 1.414).
A0 is defined so that it has an area of 1 m² before rounding to the nearest millimetre. Successive paper sizes in the series (A1, A2, A3, etc.) are defined by halving the area of the preceding paper size and rounding down, so that the long side of A(n + 1) is the same length as the short side of An. Hence, each next size is nearly exactly half of the prior size. So, an A1 page can fit two A2 pages inside the same area.", "title": "Properties" }, { "paragraph_id": 14, "text": "The most used of this series is the size A4, which is 210 mm × 297 mm (8.27 in × 11.7 in) and thus almost exactly 1⁄16 square metre (0.0625 m²; 96.875 sq in) in area. For comparison, the letter paper size commonly used in North America (8+1⁄2 in × 11 in; 216 mm × 279 mm) is about 6 mm (0.24 in) wider and 18 mm (0.71 in) shorter than A4. The size of A5 paper is half that of A4: 148 mm × 210 mm (5.8 in × 8.3 in).", "title": "Properties" }, { "paragraph_id": 15, "text": "The geometric rationale for using the square root of 2 is to maintain the aspect ratio of each subsequent rectangle after cutting or folding an A-series sheet in half, perpendicular to the longer side. Given a rectangle with a longer side x and a shorter side y, halving it gives a rectangle with longer side y and shorter side x/2. Requiring that the aspect ratio be preserved means x/y = y/(x/2), which reduces to x² = 2y², i.e. x/y = √2; in other words, an aspect ratio of 1:√2.", "title": "Properties" }, { "paragraph_id": 16, "text": "The B series is defined in the standard as follows: \"A subsidiary series of sizes is obtained by placing the geometrical means between adjacent sizes of the A series in sequence.\" The use of the geometric mean makes each step in size: B0, A0, B1, A1, B2 ... smaller than the previous one by the same factor, ⁴√2 ≈ 1.19. As with the A series, the lengths of the B series have the ratio √2, and folding one in half (and rounding down to the nearest millimetre) gives the next in the series.
The shorter side of B0 is exactly 1 metre.", "title": "Properties" }, { "paragraph_id": 17, "text": "There is also an incompatible Japanese B series which the JIS defines to have 1.5 times the area of the corresponding JIS A series (which is identical to the ISO A series). Thus, the lengths of JIS B series paper are √1.5 ≈ 1.22 times those of A-series paper. By comparison, the lengths of ISO B series paper are ⁴√2 ≈ 1.19 times those of A-series paper.", "title": "Properties" }, { "paragraph_id": 18, "text": "The C series formats are geometric means between the B series and A series formats with the same number (e.g. C2 is the geometric mean between B2 and A2). The width to height ratio of C series formats is √2 as in the A and B series. A, B, and C series of paper fit together as part of a geometric progression, with ratio of successive side lengths of ⁸√2, though there is no size half-way between Bn and A(n − 1): A4, C4, B4, \"D4\", A3, ...; there is such a D-series in the Swedish extensions to the system. The lengths of ISO C series paper are therefore ⁸√2 ≈ 1.09 times those of A-series paper.", "title": "Properties" }, { "paragraph_id": 19, "text": "The C series formats are used mainly for envelopes. An unfolded A4 page will fit into a C4 envelope. Due to the same width to height ratio, if an A4 page is folded in half so that it is A5 in size, it will fit into a C5 envelope (which will be the same size as a C4 envelope folded in half).", "title": "Properties" }, { "paragraph_id": 20, "text": "The tolerances specified in the standard are:", "title": "Tolerances" }, { "paragraph_id": 21, "text": "These are related to comparison between series A, B and C.", "title": "Tolerances" }, { "paragraph_id": 22, "text": "The ISO 216 formats are organized around the ratio 1:√2; two sheets next to each other together have the same ratio, sideways.
In scaled photocopying, for example, two A4 sheets reduced to A5 size fit exactly onto one A4 sheet, and an A4 sheet in magnified size onto an A3 sheet; in each case, there is neither waste nor want.", "title": "Application" }, { "paragraph_id": 23, "text": "The principal countries not generally using the ISO paper sizes are the United States and Canada, which use North American paper sizes. Although they have also officially adopted the ISO 216 paper format, Mexico, Panama, Peru, Colombia, the Philippines, and Chile also use mostly U.S. paper sizes.", "title": "Application" }, { "paragraph_id": 24, "text": "Rectangular sheets of paper with the ratio 1:√2 are popular in paper folding, such as origami, where they are sometimes called \"A4 rectangles\" or \"silver rectangles\". In other contexts, the term \"silver rectangle\" can also refer to a rectangle in the proportion 1:(1 + √2), known as the silver ratio.", "title": "Application" }, { "paragraph_id": 25, "text": "An adjunct to the ISO paper sizes, particularly the A series, are the technical drawing line widths specified in ISO 128. For example, line type A (\"Continuous - thick\", used for \"visible outlines\") has a standard thickness of 0.7 mm on an A0-sized sheet, 0.5 mm on an A1 sheet, and 0.35 mm on A2, A3, or A4.", "title": "Matching technical pen widths" }, { "paragraph_id": 26, "text": "The matching technical pen widths are 0.13, 0.18, 0.25, 0.35, 0.5, 0.7, 1.0, 1.40, and 2.0 mm, as specified in ISO 9175-1. Colour codes are assigned to each size to facilitate easy recognition by the drafter. 
These sizes again increase by a factor of √2, so that particular pens can be used on particular sizes of paper, and then the next smaller or larger size can be used to continue the drawing after it has been reduced or enlarged, respectively.", "title": "Matching technical pen widths" }, { "paragraph_id": 27, "text": "The earlier DIN 6775 standard upon which ISO 9175-1 is based also specified a term and symbol for easy identification of pens and drawing templates compatible with the standard, called Micronorm, which may still be found on some technical drafting equipment.", "title": "Matching technical pen widths" } ]
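The aspect-ratio argument in the Properties paragraphs above can be written out as a one-line derivation (plain LaTeX, added here only for clarity):

```latex
% Why folding an A-series sheet in half preserves its aspect ratio.
% Let the sheet be x (long side) by y (short side); the folded half is y by x/2.
\frac{x}{y} = \frac{y}{x/2}
\;\Longrightarrow\; x^2 = 2y^2
\;\Longrightarrow\; \frac{x}{y} = \sqrt{2}
```

The same relation explains the pen-width series: reducing a drawing by one paper size scales all lengths by 1/√2, which maps each standard pen width onto the next smaller one.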
ISO 216 is an international standard for paper sizes, used around the world except in North America and parts of Latin America. The standard defines the "A", "B" and "C" series of paper sizes, including A4, the most commonly available paper size worldwide. Two supplementary standards, ISO 217 and ISO 269, define related paper sizes; the ISO 269 "C" series is commonly listed alongside the A and B sizes. All ISO 216, ISO 217 and ISO 269 paper sizes have the same aspect ratio, √2:1, within rounding to millimetres. This ratio has the unique property that when cut or folded in half widthways, the halves also have the same aspect ratio. Each ISO paper size is one half of the area of the next larger size in the same series.
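The halving-and-rounding rules described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the standard; the function name `a_series` is ours. A0 is taken as 841 mm × 1189 mm (an area of 1 m² with aspect ratio √2, rounded to the nearest millimetre), and each subsequent size takes the short side of its predecessor as its long side while the long side is halved, rounding down:

```python
from math import floor

def a_series(n_max: int = 5) -> list[tuple[int, int]]:
    """Return (short, long) dimensions in mm for A0..A{n_max}."""
    # A0: sides of 2^(-1/4) m and 2^(1/4) m, rounded to the nearest mm.
    sizes = [(round(1000 * 2 ** -0.25), round(1000 * 2 ** 0.25))]  # (841, 1189)
    for _ in range(n_max):
        short, long_side = sizes[-1]
        # Fold across the long side; round down to a whole millimetre.
        sizes.append((floor(long_side / 2), short))
    return sizes

for n, (w, h) in enumerate(a_series()):
    print(f"A{n}: {w} mm x {h} mm")   # A4 comes out as 210 mm x 297 mm
```

The B sizes can be derived the same way from geometric means of adjacent A sizes, which is why the short side of B0 is exactly 1 metre.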
2001-11-25T04:29:46Z
2023-12-01T09:18:23Z
[ "Template:Cite journal", "Template:Commons category", "Template:Use dmy dates", "Template:Math", "Template:Sfrac", "Template:Sup", "Template:Lead too short", "Template:Cvt", "Template:Convert", "Template:Cite book", "Template:Short description", "Template:Reflist", "Template:Cite web", "Template:Deutsches Institut für Normung", "Template:0", "Template:Redirect", "Template:Nowrap", "Template:ISO standards" ]
https://en.wikipedia.org/wiki/ISO_216
15,276
ISO 3864
ISO 3864 specifies international standards for safety signs and markings in workplaces and public facilities. These labels are graphical, to overcome language barriers. The standard is split into four parts. ISO 3864 consists of four parts that provide more specific, situation-specific guidance depending on the application. Part 1 explains how to lay out the components of safety signage and dictates the color scheme and sizing information. Part 2 covers the same concepts as part one, but specifically for labels applied on machinery, vehicles and consumer goods. Part 3 contains guidance for designing new safety symbols. Part 4 specifies the standards for phosphorescent material and colours of a sign, as well as testing to confirm these signs meet required standards. These are the colours specified in ISO Standard 3864-4 in the RAL colour standard. In addition, ISO standard 3864-2:2016 lays out the following colours that correspond to levels of risk. This standard adds "Orange" as an incremental colour to the palette above. ISO 3864-3 defines four types of arrow designs, and specifies what situations each type should be used in. Part 1 also provides design standards for 'safety markings', which are safety colors combined with a contrasting color in an alternating 45° stripe pattern, intended to increase the visibility of an object, location or safety message. In addition to prescribing colours for safety signage, ISO 3864 also specifies how to lay out the elements of the sign: a symbol and an optional 'supplemental sign' which contains the supplementary text message. For situations where more than one message needs to be communicated, ISO 3864 also provides guidance for "multiple signs", which consist of two or more symbol and text messages combined into a single sign. Additionally, fire protection and safe condition signs, which mark the location of equipment or exits, can be combined with an arrow to indicate the direction to the item depicted on the sign. 
The corresponding American standard is ANSI Z535. ANSI Z535.1 also explicitly uses multiple levels of hazard, including Yellow (Pantone 109) for 'caution' messages, and Orange (Pantone 151) for stronger 'warning' messages. Like ISO 3864, ANSI Z535 includes multiple sections: ANSI Z535.6-2006 defines an optional accompanying text in one or more languages. ISO 3864 is extended by ISO 7010, which provides a set of symbols based on the principles and properties specified in ISO 3864.
[ { "paragraph_id": 0, "text": "ISO 3864 specifies international standards for safety signs and markings in workplaces and public facilities. These labels are graphical, to overcome language barriers. The standard is split into four parts.", "title": "" }, { "paragraph_id": 1, "text": "ISO 3864 consists of four parts that provide more specific, situation-specific guidance depending on the application.", "title": "Parts" }, { "paragraph_id": 2, "text": "Part 1 explains how to lay out the components of safety signage and dictates the color scheme and sizing information. Part 2 covers the same concepts as part one, but specifically for labels applied on machinery, vehicles and consumer goods. Part 3 contains guidance for designing new safety symbols. Part 4 specifies the standards for phosphorescent material and colours of a sign, as well as testing to confirm these signs meet required standards.", "title": "Parts" }, { "paragraph_id": 3, "text": "These are the colours specified in ISO Standard 3864-4 in the RAL colour standard.", "title": "Components of ISO 3864" }, { "paragraph_id": 4, "text": "In addition, ISO standard 3864-2:2016 lays out the following colours that correspond to levels of risk. 
This standard adds \"Orange\" as an incremental colour to the palette above.", "title": "Components of ISO 3864" }, { "paragraph_id": 5, "text": "ISO 3864-3 defines four types of arrow designs, and specifies what situations each type should be used in.", "title": "Components of ISO 3864" }, { "paragraph_id": 6, "text": "Part 1 also provides design standards for 'safety markings', which are safety colors combined with a contrasting color in an alternating 45° stripe pattern, intended to increase the visibility of an object, location or safety message.", "title": "Components of ISO 3864" }, { "paragraph_id": 7, "text": "In addition to prescribing colours for safety signage, ISO 3864 also specifies how to lay out the elements of the sign: a symbol and an optional 'supplemental sign' which contains the supplementary text message.", "title": "Components of ISO 3864" }, { "paragraph_id": 8, "text": "For situations where more than one message needs to be communicated, ISO 3864 also provides guidance for \"multiple signs\", which consist of two or more symbol and text messages combined into a single sign. Additionally, fire protection and safe condition signs, which mark the location of equipment or exits, can be combined with an arrow to indicate the direction to the item depicted on the sign.", "title": "Components of ISO 3864" }, { "paragraph_id": 9, "text": "The corresponding American standard is ANSI Z535. ANSI Z535.1 also explicitly uses multiple levels of hazard, including Yellow (Pantone 109) for 'caution' messages, and Orange (Pantone 151) for stronger 'warning' messages. Like ISO 3864, ANSI Z535 includes multiple sections: ANSI Z535.6-2006 defines an optional accompanying text in one or more languages.", "title": "Related standards" }, { "paragraph_id": 10, "text": "ISO 3864 is extended by ISO 7010, which provides a set of symbols based on the principles and properties specified in ISO 3864.", "title": "Related standards" } ]
ISO 3864 specifies international standards for safety signs and markings in workplaces and public facilities. These labels are graphical, to overcome language barriers. The standard is split into four parts.
2001-11-25T05:45:35Z
2023-12-14T02:01:41Z
[ "Template:Short description", "Template:Infobox technology standard", "Template:Center", "Template:Efn", "Template:Notelist", "Template:Reflist", "Template:Cite web", "Template:ISO standards" ]
https://en.wikipedia.org/wiki/ISO_3864
15,281
Isaac Abendana
Isaac Abendana (c. 1640–1699) was the younger brother of Jacob Abendana, and became hakam of the Spanish Portuguese Synagogue in London after his brother died. Abendana moved to England before his brother, in 1662, and taught Hebrew at Cambridge University. He completed an unpublished Latin translation of the Mishnah for the university in 1671. While he was at Cambridge, Abendana sold Hebrew books to the Bodleian Library of Oxford, and in 1689 he took a teaching position in Magdalen College. In Oxford, he wrote a series of Jewish almanacs for Christians, which he later collected and compiled as the Discourses on the Ecclesiastical and Civil Polity of the Jews (1706). Like his brother, he maintained an extensive correspondence with leading Christian scholars of his time, most notably with the philosopher Ralph Cudworth, master of Christ's College, Cambridge.
[ { "paragraph_id": 0, "text": "Isaac Abendana (c. 1640–1699) was the younger brother of Jacob Abendana, and became hakam of the Spanish Portuguese Synagogue in London after his brother died.", "title": "" }, { "paragraph_id": 1, "text": "Abendana moved to England before his brother, in 1662, and taught Hebrew at Cambridge University. He completed an unpublished Latin translation of the Mishnah for the university in 1671.", "title": "" }, { "paragraph_id": 2, "text": "While he was at Cambridge, Abendana sold Hebrew books to the Bodleian Library of Oxford, and in 1689 he took a teaching position in Magdalen College. In Oxford, he wrote a series of Jewish almanacs for Christians, which he later collected and compiled as the Discourses on the Ecclesiastical and Civil Polity of the Jews (1706). Like his brother, he maintained an extensive correspondence with leading Christian scholars of his time, most notably with the philosopher Ralph Cudworth, master of Christ's College, Cambridge.", "title": "" }, { "paragraph_id": 3, "text": "", "title": "References" } ]
Isaac Abendana was the younger brother of Jacob Abendana, and became hakam of the Spanish Portuguese Synagogue in London after his brother died. Abendana moved to England before his brother, in 1662, and taught Hebrew at Cambridge University. He completed an unpublished Latin translation of the Mishnah for the university in 1671. While he was at Cambridge, Abendana sold Hebrew books to the Bodleian Library of Oxford, and in 1689 he took a teaching position in Magdalen College. In Oxford, he wrote a series of Jewish almanacs for Christians, which he later collected and compiled as the Discourses on the Ecclesiastical and Civil Polity of the Jews (1706). Like his brother, he maintained an extensive correspondence with leading Christian scholars of his time, most notably with the philosopher Ralph Cudworth, master of Christ's College, Cambridge.
2002-02-25T15:51:15Z
2023-07-31T22:28:06Z
[ "Template:Use dmy dates", "Template:Circa", "Template:Reflist", "Template:Cite ODNB", "Template:Wikisource1911Enc", "Template:Authority control", "Template:Judaism-bio-stub", "Template:Short description", "Template:About", "Template:One source", "Template:England-reli-bio-stub" ]
https://en.wikipedia.org/wiki/Isaac_Abendana
15,284
List of intelligence agencies
This is a list of intelligence agencies by country. It includes only currently operational institutions. An intelligence agency is a government agency responsible for the collection, analysis, and exploitation of information in support of law enforcement, national security, military, and foreign policy objectives. Bahamas: National Crime Intelligence Agency (NCIA). People's Republic of China: Central Committee of the Chinese Communist Party (CCCPC); People's Liberation Army (PLA); State Council of the People's Republic of China. Croatia: Internal and Foreign Intelligence; Military Intelligence. Cuba: Communist Party of Cuba (PCC) Ministry of the FAR (MINFAR); Ministry of the Interior. Gambia: State Intelligence Services (the Gambia) (SIS). India: Military Intelligence. Ireland: Foreign & Domestic Military Intelligence (Defence Forces); Domestic Police Intelligence (Garda Síochána). Italy: Dipartimento delle Informazioni per la Sicurezza (DIS) - Department of Information for Security. Pakistan: National Intelligence Coordination Committee (NICC). Singapore: Ministry of Defense (MINDEF); Ministry of Home Affairs (MHA). Syria: National Security Bureau. United Kingdom: Domestic intelligence; Foreign intelligence; Signals intelligence; Criminal Intelligence and Protected Persons.
[ { "paragraph_id": 0, "text": "This is a list of intelligence agencies by country. It includes only currently operational institutions.", "title": "" }, { "paragraph_id": 1, "text": "An intelligence agency is a government agency responsible for the collection, analysis, and exploitation of information in support of law enforcement, national security, military, and foreign policy objectives.", "title": "" }, { "paragraph_id": 2, "text": "National Crime Intelligence Agency (NCIA)", "title": "Bahamas" }, { "paragraph_id": 3, "text": "Central Committee of the Chinese Communist Party (CCCPC)", "title": "People's Republic of China" }, { "paragraph_id": 4, "text": "People's Liberation Army (PLA)", "title": "People's Republic of China" }, { "paragraph_id": 5, "text": "State Council of the People's Republic of China", "title": "People's Republic of China" }, { "paragraph_id": 6, "text": "Internal and Foreign Intelligence", "title": "Croatia" }, { "paragraph_id": 7, "text": "Military Intelligence", "title": "Croatia" }, { "paragraph_id": 8, "text": "Communist Party of Cuba (PCC) Ministry of the FAR (MINFAR)", "title": "Cuba" }, { "paragraph_id": 9, "text": "Ministry of the Interior", "title": "Cuba" }, { "paragraph_id": 10, "text": "State Intelligence Services (the Gambia) (SIS)", "title": "Gambia" }, { "paragraph_id": 11, "text": "Military Intelligence", "title": "India" }, { "paragraph_id": 12, "text": "Foreign & Domestic Military Intelligence (Defence Forces)", "title": "Ireland" }, { "paragraph_id": 13, "text": "Domestic Police Intelligence (Garda Síochána)", "title": "Ireland" }, { "paragraph_id": 14, "text": "Dipartimento delle Informazioni per la Sicurezza (DIS) - Department of Information for Security", "title": "Italy" }, { "paragraph_id": 15, "text": "", "title": "Jordan" }, { "paragraph_id": 16, "text": "", "title": "Kenya" }, { "paragraph_id": 17, "text": "", "title": "Kyrgyzstan" }, { "paragraph_id": 18, "text": "", "title": "Madagascar" }, { "paragraph_id": 
19, "text": "", "title": "Morocco" }, { "paragraph_id": 20, "text": "National Intelligence Coordination Committee (NICC)", "title": "Pakistan" }, { "paragraph_id": 21, "text": "Ministry of Defense (MINDEF)", "title": "Singapore" }, { "paragraph_id": 22, "text": "Ministry of Home Affairs (MHA)", "title": "Singapore" }, { "paragraph_id": 23, "text": "National Security Bureau", "title": "Syria" }, { "paragraph_id": 24, "text": "Domestic intelligence", "title": "United Kingdom" }, { "paragraph_id": 25, "text": "Foreign intelligence", "title": "United Kingdom" }, { "paragraph_id": 26, "text": "Signals intelligence", "title": "United Kingdom" }, { "paragraph_id": 27, "text": "Criminal Intelligence and Protected Persons", "title": "United Kingdom" } ]
This is a list of intelligence agencies by country. It includes only currently operational institutions. An intelligence agency is a government agency responsible for the collection, analysis, and exploitation of information in support of law enforcement, national security, military, and foreign policy objectives.
2001-11-26T15:17:16Z
2023-12-28T18:29:04Z
[ "Template:See also", "Template:Reflist", "Template:TOC right", "Template:Cite news", "Template:Cite book", "Template:Ill", "Template:Cite journal", "Template:National Intelligence Agencies", "Template:Intelligence cycle management", "Template:Authority control", "Template:Cite web", "Template:National intelligence agencies in Europe", "Template:Short description", "Template:Incomplete list", "Template:Main", "Template:Lang", "Template:Spaced ndash" ]
https://en.wikipedia.org/wiki/List_of_intelligence_agencies
15,285
Internet Engineering Task Force
The Internet Engineering Task Force (IETF) is a standards organization for the Internet and is responsible for the technical standards that make up the Internet protocol suite (TCP/IP). It has no formal membership roster or requirements and all its participants are volunteers. Their work is usually funded by employers or other sponsors. The IETF was initially supported by the federal government of the United States but since 1993 has operated under the auspices of the Internet Society, a non-profit organization with local chapters around the world. The IETF is organized into a large number of working groups and birds of a feather informal discussion groups, each dealing with a specific topic. The IETF operates in a bottom-up task creation mode, largely driven by these working groups. Each working group has an appointed chairperson (or sometimes several co-chairs); a charter that describes its focus; and what it is expected to produce, and when. It is open to all who want to participate and holds discussions on an open mailing list or at IETF meetings, where the entry fee in July 2014 was US$650 per person. As of mid-2018 the fees are: early bird $700, late payment $875, student $150 and a one-day pass for $375. Rough consensus is the primary basis for decision making. There are no formal voting procedures. Because the majority of the IETF's work is done via mailing lists, meeting attendance is not required for contributors. Each working group is intended to complete work on its topic and then disband. In some cases, the working group will instead have its charter updated to take on new tasks as appropriate. The working groups are organized into areas by subject matter. Current areas are Applications, General, Internet, Operations and Management, Real-time Applications and Infrastructure, Routing, Security, and Transport. Each area is overseen by an area director (AD), with most areas having two co-ADs. The ADs are responsible for appointing working group chairs. 
The area directors, together with the IETF Chair, form the Internet Engineering Steering Group (IESG), which is responsible for the overall operation of the IETF. The Internet Architecture Board (IAB) oversees the IETF's external relationships and relations with the RFC Editor. The IAB provides long-range technical direction for Internet development. The IAB is also jointly responsible for the IETF Administrative Oversight Committee (IAOC), which oversees the IETF Administrative Support Activity (IASA), which provides logistical, etc. support for the IETF. The IAB also manages the Internet Research Task Force (IRTF), with which the IETF has a number of cross-group relations. A Nominating Committee (NomCom) of ten randomly chosen volunteers who participate regularly at meetings is vested with the power to appoint, reappoint, and remove members of the IESG, IAB, IASA, and the IAOC. To date, no one has been removed by a NomCom, although several people have resigned their positions, requiring replacements. In 1993 the IETF changed from an activity supported by the US Federal Government to an independent, international activity associated with the Internet Society, a US-based 501(c)(3) organization. Because the IETF itself does not have members, nor is it an organization per se, the Internet Society provides the financial and legal framework for the activities of the IETF and its sister bodies (IAB, IRTF). IETF activities are funded by meeting fees, meeting sponsors and by the Internet Society via its organizational membership and the proceeds of the Public Interest Registry. In December 2005 the IETF Trust was established to manage the copyrighted materials produced by the IETF. The Internet Engineering Steering Group (IESG) is a body composed of the Internet Engineering Task Force (IETF) chair and area directors. It provides the final technical review of Internet standards and is responsible for day-to-day management of the IETF. 
It receives appeals of the decisions of the working groups, and the IESG makes the decision to progress documents in the standards track. The chair of the IESG is the director of the General Area, who also serves as the overall IETF Chair. Members of the IESG include the two directors of each of the following areas: Liaison and ex officio members include: The Gateway Algorithms and Data Structures (GADS) Task Force was the precursor to the IETF. Its chairman was David L. Mills of the University of Delaware. In January 1986, the Internet Activities Board (IAB; now called the Internet Architecture Board) decided to divide GADS into two entities: an Internet Architecture (INARC) Task Force chaired by Mills to pursue research goals, and the IETF to handle nearer-term engineering and technology transfer issues. The first IETF chair was Mike Corrigan, who was then the technical program manager for the Defense Data Network (DDN). Also in 1986, after leaving DARPA, Robert E. Kahn founded the Corporation for National Research Initiatives (CNRI), which began providing administrative support to the IETF. In 1987, Corrigan was succeeded as IETF chair by Phill Gross. Effective March 1, 1989, but providing support dating back to late 1988, CNRI and NSF entered into a Cooperative Agreement No. NCR-8820945, wherein CNRI agreed to create and provide a "secretariat" for the "overall coordination, management and support of the work of the IAB, its various task forces and, particularly, the IETF." In 1992, CNRI supported the formation and early funding of the Internet Society, which took on the IETF as a fiscally sponsored project, along with the IAB, the IRTF, and the organization of annual INET meetings. Phill Gross continued to serve as IETF chair throughout this transition. 
Cerf, Kahn, and Lyman Chapin announced the formation of ISOC as "a professional society to facilitate, support, and promote the evolution and growth of the Internet as a global research communications infrastructure". At the first board meeting of the Internet Society, Vint Cerf, representing CNRI, offered, "In the event a deficit occurs, CNRI has agreed to contribute up to US$102,000 to offset it." In 1993, Cerf continued to support the formation of ISOC while working for CNRI, and the role of ISOC in "the official procedures for creating and documenting Internet Standards" was codified in the IETF's RFC 1602. In 1995, IETF's RFC 2031 described ISOC's role in the IETF as being purely administrative, and ISOC as having "no influence whatsoever on the Internet Standards process, the Internet Standards or their technical content". In 1998, CNRI established Foretec Seminars, Inc. (Foretec), a for-profit subsidiary to take over providing Secretariat services to the IETF. Foretec provided these services until at least 2004. By 2013, Foretec was dissolved. In 2003, IETF's RFC 3677 described the IETF's role in appointing 3 board members to the ISOC's board of directors. In 2018, ISOC established The IETF Administration LLC, a separate LLC to handle the administration of the IETF. In 2019, the LLC issued a call for proposals to provide secretariat services to the IETF. The first IETF meeting was attended by 21 US Federal Government-funded researchers on 16 January 1986. It was a continuation of the work of the earlier GADS Task Force. Representatives from non-governmental entities (such as gateway vendors) were invited to attend starting with the fourth IETF meeting in October 1986. Since that time all IETF meetings have been open to the public. Initially, the IETF met quarterly, but from 1991, it has been meeting three times a year. The initial meetings were very small, with fewer than 35 people in attendance at each of the first five meetings. 
The maximum attendance during the first 13 meetings was only 120 attendees. This occurred at the 12th meeting held during January 1989. These meetings have grown in both participation and scope a great deal since the early 1990s; attendance peaked at 2,810 at the December 2000 IETF held in San Diego, California. Attendance declined with industry restructuring during the early 2000s, and is currently around 1,200. The locations of IETF meetings vary greatly. A list of past and future meeting locations can be found on the IETF meetings page. The IETF strives to hold its meetings near where most of the IETF volunteers are located. For many years, the goal was three meetings a year, with two in North America and one in either Europe or Asia, alternating between them every other year. The current goal is to hold three meetings in North America, two in Europe and one in Asia during a two-year period. However, corporate sponsorship of the meetings is also an important factor and the schedule has been modified from time to time in order to decrease operational costs. The IETF also organizes hackathons during the IETF meetings. The focus is on implementing code that will improve standards in terms of quality and interoperability. 
Most specifications are focused on single protocols rather than tightly interlocked systems. This has allowed the protocols to be used in many different systems, and its standards are routinely re-used by bodies which create full-fledged architectures (e.g. 3GPP IMS). Because it relies on volunteers and uses "rough consensus and running code" as its touchstone, results can be slow whenever the number of volunteers is either too small to make progress, or so large as to make consensus difficult, or when volunteers lack the necessary expertise. For protocols like SMTP, which is used to transport e-mail for a user community in the many hundreds of millions, there is also considerable resistance to any change that is not fully backward compatible, except for IPv6. Work within the IETF on ways to improve the speed of the standards-making process is ongoing but, because the number of volunteers with opinions on it is very great, consensus on improvements has been slow to develop. The IETF cooperates with the W3C, ISO/IEC, ITU, and other standards bodies. Statistics are available showing the top contributors by RFC publication. While the IETF only allows for participation by individuals, and not by corporations or governments, sponsorship information is available from these statistics. The IETF Chairperson is selected by the Nominating Committee (NomCom) process for a 2-year renewable term. Before 1993, the IETF Chair was selected by the IAB. A list of the past and current Chairs of the IETF follows: The IETF works on a broad range of networking technologies which provide the foundation for the Internet's growth and evolution. It aims to improve the efficiency of network management as networks grow in size and complexity. The IETF is also standardizing protocols for autonomic networking that enable networks to be self-managing. 
The Internet of Things (IoT) is a network of physical objects that are embedded with electronics, sensors, and software, enabling those objects to exchange data with their operator, manufacturer, and other connected devices. Several IETF working groups are developing protocols that are directly relevant to IoT. Transport work provides internet applications with the ability to send data over the Internet. There are some well-established transport protocols, such as TCP (Transmission Control Protocol) and UDP (User Datagram Protocol), which are continually extended and refined to meet the needs of the global Internet. The IETF divides its work into a number of areas, each with working groups that relate to that area's focus. Area Directors handle the primary task of area management, and may be advised by one or more Directorates. The area structure is defined by the Internet Engineering Steering Group. The Nominations Committee can be used to add new members. In October 2018, Microsoft and Google engineers introduced a plan to create the Token Binding Protocol in order to stop replay attacks on OAuth tokens.
[ { "paragraph_id": 0, "text": "The Internet Engineering Task Force (IETF) is a standards organization for the Internet and is responsible for the technical standards that make up the Internet protocol suite (TCP/IP). It has no formal membership roster or requirements and all its participants are volunteers. Their work is usually funded by employers or other sponsors.", "title": "" }, { "paragraph_id": 1, "text": "The IETF was initially supported by the federal government of the United States but since 1993 has operated under the auspices of the Internet Society, a non-profit organization with local chapters around the world.", "title": "" }, { "paragraph_id": 2, "text": "The IETF is organized into a large number of working groups and birds of a feather informal discussion groups, each dealing with a specific topic. The IETF operates in a bottom-up task creation mode, largely driven by these working groups. Each working group has an appointed chairperson (or sometimes several co-chairs); a charter that describes its focus; and what it is expected to produce, and when. It is open to all who want to participate and holds discussions on an open mailing list or at IETF meetings, where the entry fee in July 2014 was US$650 per person. As of mid-2018 the fees are: early bird $700, late payment $875, student $150 and a one-day pass for $375.", "title": "Organization" }, { "paragraph_id": 3, "text": "Rough consensus is the primary basis for decision making. There are no formal voting procedures. Because the majority of the IETF's work is done via mailing lists, meeting attendance is not required for contributors. Each working group is intended to complete work on its topic and then disband. In some cases, the working group will instead have its charter updated to take on new tasks as appropriate.", "title": "Organization" }, { "paragraph_id": 4, "text": "The working groups are organized into areas by subject matter. 
Current areas are Applications, General, Internet, Operations and Management, Real-time Applications and Infrastructure, Routing, Security, and Transport. Each area is overseen by an area director (AD), with most areas having two co-ADs. The ADs are responsible for appointing working group chairs. The area directors, together with the IETF Chair, form the Internet Engineering Steering Group (IESG), which is responsible for the overall operation of the IETF.", "title": "Organization" }, { "paragraph_id": 5, "text": "The Internet Architecture Board (IAB) oversees the IETF's external relationships and relations with the RFC Editor. The IAB provides long-range technical direction for Internet development. The IAB is also jointly responsible for the IETF Administrative Oversight Committee (IAOC), which oversees the IETF Administrative Support Activity (IASA), which provides logistical and other support for the IETF. The IAB also manages the Internet Research Task Force (IRTF), with which the IETF has a number of cross-group relations.", "title": "Organization" }, { "paragraph_id": 6, "text": "A Nominating Committee (NomCom) of ten randomly chosen volunteers who participate regularly at meetings is vested with the power to appoint, reappoint, and remove members of the IESG, IAB, IASA, and the IAOC. To date, no one has been removed by a NomCom, although several people have resigned their positions, requiring replacements.", "title": "Organization" }, { "paragraph_id": 7, "text": "In 1993 the IETF changed from an activity supported by the US Federal Government to an independent, international activity associated with the Internet Society, a US-based 501(c)(3) organization. Because the IETF itself does not have members, nor is it an organization per se, the Internet Society provides the financial and legal framework for the activities of the IETF and its sister bodies (IAB, IRTF). 
IETF activities are funded by meeting fees, meeting sponsors and by the Internet Society via its organizational membership and the proceeds of the Public Interest Registry.", "title": "Organization" }, { "paragraph_id": 8, "text": "In December 2005 the IETF Trust was established to manage the copyrighted materials produced by the IETF.", "title": "Organization" }, { "paragraph_id": 9, "text": "The Internet Engineering Steering Group (IESG) is a body composed of the Internet Engineering Task Force (IETF) chair and area directors. It provides the final technical review of Internet standards and is responsible for day-to-day management of the IETF. It receives appeals of the decisions of the working groups, and the IESG makes the decision to progress documents in the standards track.", "title": "Organization" }, { "paragraph_id": 10, "text": "The chair of the IESG is the director of the General Area, who also serves as the overall IETF Chair. Members of the IESG include the two directors of each of the following areas:", "title": "Organization" }, { "paragraph_id": 11, "text": "Liaison and ex officio members include:", "title": "Organization" }, { "paragraph_id": 12, "text": "The Gateway Algorithms and Data Structures (GADS) Task Force was the precursor to the IETF. Its chairman was David L. Mills of the University of Delaware.", "title": "Early leadership and administrative history" }, { "paragraph_id": 13, "text": "In January 1986, the Internet Activities Board (IAB; now called the Internet Architecture Board) decided to divide GADS into two entities: an Internet Architecture (INARC) Task Force chaired by Mills to pursue research goals, and the IETF to handle nearer-term engineering and technology transfer issues. The first IETF chair was Mike Corrigan, who was then the technical program manager for the Defense Data Network (DDN). Also in 1986, after leaving DARPA, Robert E. 
Kahn founded the Corporation for National Research Initiatives (CNRI), which began providing administrative support to the IETF.", "title": "Early leadership and administrative history" }, { "paragraph_id": 14, "text": "In 1987, Corrigan was succeeded as IETF chair by Phill Gross.", "title": "Early leadership and administrative history" }, { "paragraph_id": 15, "text": "Effective March 1, 1989, but providing support dating back to late 1988, CNRI and NSF entered into a Cooperative Agreement No. NCR-8820945, wherein CNRI agreed to create and provide a \"secretariat\" for the \"overall coordination, management and support of the work of the IAB, its various task forces and, particularly, the IETF.\"", "title": "Early leadership and administrative history" }, { "paragraph_id": 16, "text": "In 1992, CNRI supported the formation and early funding of the Internet Society, which took on the IETF as a fiscally sponsored project, along with the IAB, the IRTF, and the organization of annual INET meetings. Phill Gross continued to serve as IETF chair throughout this transition. Cerf, Kahn, and Lyman Chapin announced the formation of ISOC as \"a professional society to facilitate, support, and promote the evolution and growth of the Internet as a global research communications infrastructure\". 
At the first board meeting of the Internet Society, Vint Cerf, representing CNRI, offered, \"In the event a deficit occurs, CNRI has agreed to contribute up to USD102000 to offset it.\" In 1993, Cerf continued to support the formation of ISOC while working for CNRI, and the role of ISOC in \"the official procedures for creating and documenting Internet Standards\" was codified in the IETF's RFC 1602.", "title": "Early leadership and administrative history" }, { "paragraph_id": 17, "text": "In 1995, the IETF's RFC 2031 described ISOC's role in the IETF as being purely administrative, and ISOC as having \"no influence whatsoever on the Internet Standards process, the Internet Standards or their technical content\".", "title": "Early leadership and administrative history" }, { "paragraph_id": 18, "text": "In 1998, CNRI established Foretec Seminars, Inc. (Foretec), a for-profit subsidiary to take over providing Secretariat services to the IETF. Foretec provided these services until at least 2004. By 2013, Foretec was dissolved.", "title": "Early leadership and administrative history" }, { "paragraph_id": 19, "text": "In 2003, the IETF's RFC 3677 described the IETF's role in appointing three board members to ISOC's board of directors.", "title": "Early leadership and administrative history" }, { "paragraph_id": 20, "text": "In 2018, ISOC established The IETF Administration LLC, a separate LLC to handle the administration of the IETF. In 2019, the LLC issued a call for proposals to provide secretariat services to the IETF.", "title": "Early leadership and administrative history" }, { "paragraph_id": 21, "text": "The first IETF meeting was attended by 21 US Federal Government-funded researchers on 16 January 1986. It was a continuation of the work of the earlier GADS Task Force. Representatives from non-governmental entities (such as gateway vendors) were invited to attend starting with the fourth IETF meeting in October 1986. 
Since that time all IETF meetings have been open to the public.", "title": "Meetings" }, { "paragraph_id": 22, "text": "Initially, the IETF met quarterly, but from 1991, it has been meeting three times a year. The initial meetings were very small, with fewer than 35 people in attendance at each of the first five meetings. The maximum attendance during the first 13 meetings was only 120 attendees. This occurred at the 12th meeting, held during January 1989. These meetings have grown a great deal in both participation and scope since the early 1990s; attendance peaked at 2,810 at the December 2000 IETF held in San Diego, California. Attendance declined with industry restructuring during the early 2000s, and is currently around 1,200.", "title": "Meetings" }, { "paragraph_id": 23, "text": "The locations for IETF meetings vary greatly. A list of past and future meeting locations can be found on the IETF meetings page. The IETF strives to hold its meetings near where most of the IETF volunteers are located. For many years, the goal was three meetings a year, with two in North America and one in either Europe or Asia, alternating between them every other year. The current goal is to hold three meetings in North America, two in Europe and one in Asia during a two-year period. However, corporate sponsorship of the meetings is also an important factor and the schedule has been modified from time to time in order to decrease operational costs.", "title": "Meetings" }, { "paragraph_id": 24, "text": "The IETF also organizes hackathons during the IETF meetings. 
The focus is on implementing code that will improve standards in terms of quality and interoperability.", "title": "Meetings" }, { "paragraph_id": 25, "text": "The details of IETF operations have changed considerably as the organization has grown, but the basic mechanism remains publication of proposed specifications, development based on the proposals, review and independent testing by participants, and republication as a revised proposal, a draft proposal, or eventually as an Internet Standard. IETF standards are developed in an open, all-inclusive process in which any interested individual can participate. All IETF documents are freely available over the Internet and can be reproduced at will. Multiple, working, useful, interoperable implementations are the chief requirement before an IETF proposed specification can become a standard. Most specifications are focused on single protocols rather than tightly interlocked systems. This has allowed the protocols to be used in many different systems, and its standards are routinely re-used by bodies which create full-fledged architectures (e.g. 3GPP IMS).", "title": "Operations" }, { "paragraph_id": 26, "text": "Because it relies on volunteers and uses \"rough consensus and running code\" as its touchstone, results can be slow whenever the number of volunteers is either too small to make progress, or so large as to make consensus difficult, or when volunteers lack the necessary expertise. For protocols like SMTP, which is used to transport e-mail for a user community in the many hundreds of millions, there is also considerable resistance to any change that is not fully backward compatible, except for IPv6. 
Work within the IETF on ways to improve the speed of the standards-making process is ongoing but, because the number of volunteers with opinions on it is very great, consensus on improvements has been slow to develop.", "title": "Operations" }, { "paragraph_id": 27, "text": "The IETF cooperates with the W3C, ISO/IEC, ITU, and other standards bodies.", "title": "Operations" }, { "paragraph_id": 28, "text": "Statistics are available that show who the top contributors by RFC publication are. While the IETF only allows for participation by individuals, and not by corporations or governments, sponsorship information is available from these statistics.", "title": "Operations" }, { "paragraph_id": 29, "text": "The IETF Chairperson is selected by the Nominating Committee (NomCom) process for a 2-year renewable term. Before 1993, the IETF Chair was selected by the IAB.", "title": "Chairs" }, { "paragraph_id": 30, "text": "A list of the past and current Chairs of the IETF follows:", "title": "Chairs" }, { "paragraph_id": 31, "text": "The IETF works on a broad range of networking technologies which provide the foundation for the Internet's growth and evolution.", "title": "Topics of interest" }, { "paragraph_id": 32, "text": "It aims to improve the efficiency in the management of networks as they grow in size and complexity. The IETF is also standardizing protocols for autonomic networking that enable networks to be self-managing.", "title": "Topics of interest" }, { "paragraph_id": 33, "text": "The Internet of Things (IoT) is a network of physical objects that are embedded with electronics, sensors, and software, enabling those objects to exchange data with operators, manufacturers, and other connected devices. Several IETF working groups are developing protocols that are directly relevant to IoT.", "title": "Topics of interest" }, { "paragraph_id": 34, "text": "Work on transport technology provides the ability of internet applications to send data over the Internet. 
Well-established transport protocols such as TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are continually being extended and refined to meet the needs of the global Internet.", "title": "Topics of interest" }, { "paragraph_id": 35, "text": "The IETF divides its work into a number of areas, each containing working groups related to that area's focus. Area Directors handle the primary task of area management and may be advised by one or more Directorates. The area structure is defined by the Internet Engineering Steering Group, and the Nominations Committee can be used to add new members.", "title": "Topics of interest" }, { "paragraph_id": 36, "text": "In October 2018, Microsoft and Google engineers introduced a plan to create the Token Binding Protocol in order to stop replay attacks on OAuth tokens.", "title": "Topics of interest" } ]
The Internet Engineering Task Force (IETF) is a standards organization for the Internet and is responsible for the technical standards that make up the Internet protocol suite (TCP/IP). It has no formal membership roster or requirements and all its participants are volunteers. Their work is usually funded by employers or other sponsors. The IETF was initially supported by the federal government of the United States but since 1993 has operated under the auspices of the Internet Society, a non-profit organization with local chapters around the world.
2001-05-16T17:24:54Z
2023-12-11T17:51:02Z
[ "Template:Div col end", "Template:ISBN", "Template:Cite journal", "Template:Official Website", "Template:Short description", "Template:Internet", "Template:Citation needed", "Template:Prose", "Template:Portal", "Template:Reflist", "Template:Webarchive", "Template:Authority control", "Template:Use mdy dates", "Template:IETF RFC", "Template:Cite web", "Template:Infobox organization", "Template:Use American English", "Template:Div col", "Template:Cite book", "Template:Redirect" ]
https://en.wikipedia.org/wiki/Internet_Engineering_Task_Force
15,286
ISM radio band
The ISM radio bands are portions of the radio spectrum reserved internationally for industrial, scientific, and medical (ISM) purposes, excluding applications in telecommunications. Examples of applications for the use of radio frequency (RF) energy in these bands include radio-frequency process heating, microwave ovens, and medical diathermy machines. The powerful emissions of these devices can create electromagnetic interference and disrupt radio communication using the same frequency, so these devices are limited to certain bands of frequencies. In general, communications equipment operating in ISM bands must tolerate any interference generated by ISM applications, and users have no regulatory protection from ISM device operation in these bands. Despite the intent of the original allocations, in recent years the fastest-growing use of these bands has been for short-range, low-power wireless communications systems, since these bands are often approved for such devices, which can be used without a government license, as would otherwise be required for transmitters; ISM frequencies are often chosen for this purpose as they already must tolerate interference issues. Cordless phones, Bluetooth devices, near-field communication (NFC) devices, garage door openers, baby monitors, and wireless computer networks (Wi-Fi) may all use the ISM frequencies, although these low-power transmitters are not considered to be ISM devices. The ISM bands are defined by the ITU Radio Regulations (article 5) in footnotes 5.138, 5.150, and 5.280 of the Radio Regulations. Individual countries' use of the bands designated in these sections may differ due to variations in national radio regulations. Because communication devices using the ISM bands must tolerate any interference from ISM equipment, unlicensed operations are typically permitted to use these bands, since unlicensed operation typically needs to be tolerant of interference from other devices anyway. 
The ISM bands share allocations with unlicensed and licensed operations; however, due to the high likelihood of harmful interference, licensed use of the bands is typically low. In the United States, uses of the ISM bands are governed by Part 18 of the Federal Communications Commission (FCC) rules, while Part 15 contains the rules for unlicensed communication devices, even those that share ISM frequencies. In Europe, the ETSI develops standards for the use of Short Range Devices, some of which operate in ISM bands. The use of the ISM bands is regulated by the national spectrum regulation authorities that are members of the CEPT. The allocation of radio frequencies is provided according to Article 5 of the ITU Radio Regulations (edition 2012). In order to improve harmonisation in spectrum utilisation, the majority of service allocations stipulated in this document were incorporated in national Tables of Frequency Allocations and Utilisations, which are within the responsibility of the appropriate national administration. The allocation might be primary, secondary, exclusive, or shared. Type A (footnote 5.138) = frequency bands are designated for ISM applications. The use of these frequency bands for ISM applications shall be subject to special authorization by the administration concerned, in agreement with other administrations whose radiocommunication services might be affected. In applying this provision, administrations shall have due regard to the latest relevant ITU-R Recommendations. Type B (footnote 5.150) = frequency bands are also designated for ISM applications. Radiocommunication services operating within these bands must accept harmful interference which may be caused by these applications. ITU RR, (Footnote 5.280) = In Germany, Austria, Bosnia and Herzegovina, Croatia, North Macedonia, Liechtenstein, Montenegro, Portugal, Serbia, Slovenia and Switzerland, the band 433.05-434.79 MHz (center frequency 433.92 MHz) is designated for ISM applications. 
Radio communication services of these countries operating within this band must accept harmful interference which may be caused by these applications. The ISM bands were first established at the International Telecommunications Conference of the ITU in Atlantic City, 1947. The American delegation specifically proposed several bands, including the now commonplace 2.4 GHz band, to accommodate the then nascent process of microwave heating; however, FCC annual reports of that time suggest that much preparation was done ahead of these presentations. The report of the August 9th 1947 meeting of the Allocation of Frequencies committee includes the remark: "The delegate of the United States, referring to his request that the frequency 2450 Mc/s be allocated for I.S.M., indicated that there was in existence in the United States, and working on this frequency a diathermy machine and an electronic cooker, and that the latter might eventually be installed in transatlantic ships and airplanes. There was therefore some point in attempting to reach world agreement on this subject." Radio frequencies in the ISM bands have been used for communication purposes, although such devices may experience interference from non-communication sources. In the United States, as early as 1958 Class D Citizens Band, a Part 95 service, was allocated to frequencies that are also allocated to ISM. [1] In the U.S., the FCC first made unlicensed spread spectrum available in the ISM bands in rules adopted on May 9, 1985. Many other countries later developed similar regulations, enabling use of this technology. The FCC action was proposed by Michael Marcus of the FCC staff in 1980 and the subsequent regulatory action took five more years. It was part of a broader proposal to allow civil use of spread spectrum technology and was opposed at the time by mainstream equipment manufacturers and many radio system operators. 
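The band-edge figures quoted for the 433 MHz allocation (433.05-434.79 MHz, center frequency 433.92 MHz) follow from simple arithmetic, sketched here in a few lines of Python (the helper name is ours, not part of any regulation):

```python
# Center frequency and bandwidth from a band's edge frequencies.
def band_params(low_mhz: float, high_mhz: float) -> tuple[float, float]:
    """Return (center, bandwidth) in MHz for the given band edges."""
    return (low_mhz + high_mhz) / 2, high_mhz - low_mhz

# The 433.05-434.79 MHz ISM band of ITU RR footnote 5.280:
center, bandwidth = band_params(433.05, 434.79)
print(f"center = {center:.2f} MHz, bandwidth = {bandwidth:.2f} MHz")
# center = 433.92 MHz, bandwidth = 1.74 MHz
```

The same calculation applies to any of the ISM allocations, e.g. 2.4-2.5 GHz yields a 2.45 GHz center, which is why microwave ovens and diathermy equipment cluster at that frequency.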
Industrial, scientific and medical (ISM) applications (of radio frequency energy) (short: ISM applications) are – according to article 1.15 of the International Telecommunication Union's (ITU) ITU Radio Regulations (RR) – defined as «Operation of equipment or appliances designed to generate and use locally radio frequency energy for industrial, scientific, medical, domestic or similar purposes, excluding applications in the field of telecommunications.» The original ISM specifications envisioned that the bands would be used primarily for noncommunication purposes, such as heating. The bands are still widely used for these purposes. For many people, the most commonly encountered ISM device is the home microwave oven operating at 2.45 GHz, which uses microwaves to cook food. Industrial heating is another big application area, including induction heating, microwave heat treating, plastic softening, and plastic welding processes. In medical settings, shortwave and microwave diathermy machines use radio waves in the ISM bands to apply deep heating to the body for relaxation and healing. More recently, hyperthermia therapy has used microwaves to heat tissue to kill cancer cells. However, as detailed below, the increasing congestion of the radio spectrum, the increasing sophistication of microelectronics, and the attraction of unlicensed use have in recent decades led to an explosion of uses of these bands for short-range communication systems for wireless devices, which are now by far the largest uses of these bands. These are sometimes called "non-ISM" uses since they do not fall under the originally envisioned "industrial", "scientific", and "medical" application areas. One of the largest applications has been wireless networking (Wi-Fi). The IEEE 802.11 wireless networking protocols, the standards on which almost all wireless systems are based, use the ISM bands. 
Virtually all laptops, tablet computers, computer printers and cellphones now have 802.11 wireless modems using the 2.4 and 5.7 GHz ISM bands. Bluetooth is another networking technology using the 2.4 GHz band, which can be problematic given the probability of interference. Near-field communication (NFC) devices such as proximity cards and contactless smart cards use the lower frequency 13 and 27 MHz ISM bands. Other short range devices using the ISM bands are: wireless microphones, baby monitors, garage door openers, wireless doorbells, keyless entry systems for vehicles, radio control channels for UAVs (drones), wireless surveillance systems, RFID systems for merchandise, and wild animal tracking systems. Some electrodeless lamp designs are ISM devices, which use RF emissions to excite fluorescent tubes. Sulfur lamps are commercially available plasma lamps, which use a 2.45 GHz magnetron to heat sulfur into a brightly glowing plasma. Long-distance wireless power systems have been proposed and experimented with which would use high-power transmitters and rectennas, in lieu of overhead transmission lines and underground cables, to send power to remote locations. NASA has studied using microwave power transmission on 2.45 GHz to send energy collected by solar power satellites back to the ground. Also in space applications, a Helicon Double Layer ion thruster is a prototype spacecraft propulsion engine which uses a 13.56 MHz transmission to break down and heat gas into plasma. In recent years ISM bands have also been shared with (non-ISM) license-free error-tolerant communications applications such as wireless sensor networks in the 915 MHz and 2.450 GHz bands, as well as wireless LANs and cordless phones in the 915 MHz, 2.450 GHz, and 5.800 GHz bands. Because unlicensed devices are required to be tolerant of ISM emissions in these bands, unlicensed low power users are generally able to operate in these bands without causing problems for ISM users. 
ISM equipment does not necessarily include a radio receiver in the ISM band (e.g. a microwave oven does not have a receiver). In the United States, according to 47 CFR Part 15.5, low power communication devices must accept interference from licensed users of that frequency band, and the Part 15 device must not cause interference to licensed users. Note that the 915 MHz band should not be used in countries outside Region 2 (except those, such as Australia and Israel, that specifically allow it), particularly countries that use the GSM-900 band for cellphones. The ISM bands are also widely used for radio-frequency identification (RFID) applications, with the most commonly used band being the 13.56 MHz band used by systems compliant with ISO/IEC 14443, including those used by biometric passports and contactless smart cards. In Europe, the use of the ISM band is covered by Short Range Device regulations issued by the European Commission, based on technical recommendations by CEPT and standards by ETSI. In most of Europe, the LPD433 band is allowed for license-free voice communication in addition to PMR446. Wireless network devices use wavebands as follows: IEEE 802.15.4, Zigbee and other personal area networks may use the 915 MHz and 2450 MHz ISM bands because of frequency sharing between different allocations. Wireless LANs and cordless phones can also use bands other than those shared with ISM, but such uses require approval on a country-by-country basis. DECT phones use allocated spectrum outside the ISM bands that differs in Europe and North America. Ultra-wideband LANs require more spectrum than the ISM bands can provide, so the relevant standards such as IEEE 802.15.4a are designed to make use of spectrum outside the ISM bands. Despite the fact that these additional bands are outside the official ITU-R ISM bands, because they are used for the same types of low power personal communications, they are sometimes incorrectly referred to as ISM bands as well. 
Several brands of radio control equipment use the 2.4 GHz band for low power remote control of toys, from gas-powered cars to miniature aircraft. Worldwide Digital Cordless Telecommunications or WDCT is a technology that uses the 2.4 GHz radio spectrum. Google's Project Loon used ISM bands (specifically the 2.4 and 5.8 GHz bands) for balloon-to-balloon and balloon-to-ground communications. Pursuant to 47 CFR Part 97, some ISM bands are used by licensed amateur radio operators for communication, including amateur television.
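Since Wi-Fi in the 2.4 GHz ISM band comes up repeatedly above, a short sketch of how IEEE 802.11 subdivides that band may be useful (this is the standard 802.11 channel plan, not something this article specifies): channels 1-13 have center frequencies spaced 5 MHz apart starting at 2412 MHz, with channel 14 a Japan-only special case at 2484 MHz.

```python
# IEEE 802.11 channel center frequencies in the 2.4 GHz ISM band.
def channel_center_mhz(channel: int) -> int:
    if channel == 14:
        return 2484            # Japan-only special case
    if 1 <= channel <= 13:
        return 2407 + 5 * channel
    raise ValueError("2.4 GHz band has channels 1-14")

# The three non-overlapping channels commonly used in North America:
print([channel_center_mhz(ch) for ch in (1, 6, 11)])
# [2412, 2437, 2462]
```

Because each 802.11 channel is roughly 20 MHz wide but centers are only 5 MHz apart, adjacent channels overlap, which is why deployments favor the widely separated channels 1, 6, and 11.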
[ { "paragraph_id": 0, "text": "The ISM radio bands are portions of the radio spectrum reserved internationally for industrial, scientific, and medical (ISM) purposes, excluding applications in telecommunications. Examples of applications for the use of radio frequency (RF) energy in these bands include radio-frequency process heating, microwave ovens, and medical diathermy machines. The powerful emissions of these devices can create electromagnetic interference and disrupt radio communication using the same frequency, so these devices are limited to certain bands of frequencies. In general, communications equipment operating in ISM bands must tolerate any interference generated by ISM applications, and users have no regulatory protection from ISM device operation in these bands.", "title": "" }, { "paragraph_id": 1, "text": "Despite the intent of the original allocations, in recent years the fastest-growing use of these bands has been for short-range, low-power wireless communications systems, since these bands are often approved for such devices, which can be used without a government license, as would otherwise be required for transmitters; ISM frequencies are often chosen for this purpose as they already must tolerate interference issues. Cordless phones, Bluetooth devices, near-field communication (NFC) devices, garage door openers, baby monitors, and wireless computer networks (Wi-Fi) may all use the ISM frequencies, although these low-power transmitters are not considered to be ISM devices.", "title": "" }, { "paragraph_id": 2, "text": "The ISM bands are defined by the ITU Radio Regulations (article 5) in footnotes 5.138, 5.150, and 5.280 of the Radio Regulations. Individual countries' use of the bands designated in these sections may differ due to variations in national radio regulations. 
Because communication devices using the ISM bands must tolerate any interference from ISM equipment, unlicensed operations are typically permitted to use these bands, since unlicensed operation typically needs to be tolerant of interference from other devices anyway. The ISM bands share allocations with unlicensed and licensed operations; however, due to the high likelihood of harmful interference, licensed use of the bands is typically low. In the United States, uses of the ISM bands are governed by Part 18 of the Federal Communications Commission (FCC) rules, while Part 15 contains the rules for unlicensed communication devices, even those that share ISM frequencies. In Europe, the ETSI develops standards for the use of Short Range Devices, some of which operate in ISM bands. The use of the ISM bands is regulated by the national spectrum regulation authorities that are members of the CEPT.", "title": "Definition" }, { "paragraph_id": 3, "text": "The allocation of radio frequencies is provided according to Article 5 of the ITU Radio Regulations (edition 2012).", "title": "Definition" }, { "paragraph_id": 4, "text": "In order to improve harmonisation in spectrum utilisation, the majority of service allocations stipulated in this document were incorporated in national Tables of Frequency Allocations and Utilisations, which are within the responsibility of the appropriate national administration. The allocation might be primary, secondary, exclusive, or shared.", "title": "Definition" }, { "paragraph_id": 5, "text": "Type A (footnote 5.138) = frequency bands are designated for ISM applications. The use of these frequency bands for ISM applications shall be subject to special authorization by the administration concerned, in agreement with other administrations whose radiocommunication services might be affected. 
In applying this provision, administrations shall have due regard to the latest relevant ITU-R Recommendations.", "title": "Definition" }, { "paragraph_id": 6, "text": "Type B (footnote 5.150) = frequency bands are also designated for ISM applications. Radiocommunication services operating within these bands must accept harmful interference which may be caused by these applications.", "title": "Definition" }, { "paragraph_id": 7, "text": "ITU RR, (Footnote 5.280) = In Germany, Austria, Bosnia and Herzegovina, Croatia, North Macedonia, Liechtenstein, Montenegro, Portugal, Serbia, Slovenia and Switzerland, the band 433.05-434.79 MHz (center frequency 433.92 MHz) is designated for ISM applications. Radio communication services of these countries operating within this band must accept harmful interference which may be caused by these applications.", "title": "Definition" }, { "paragraph_id": 8, "text": "The ISM bands were first established at the International Telecommunications Conference of the ITU in Atlantic City, 1947. The American delegation specifically proposed several bands, including the now commonplace 2.4 GHz band, to accommodate the then nascent process of microwave heating; however, FCC annual reports of that time suggest that much preparation was done ahead of these presentations.", "title": "History" }, { "paragraph_id": 9, "text": "The report of the August 9th 1947 meeting of the Allocation of Frequencies committee includes the remark:", "title": "History" }, { "paragraph_id": 10, "text": "\"The delegate of the United States, referring to his request that the frequency 2450 Mc/s be allocated for I.S.M., indicated that there was in existence in the United States, and working on this frequency a diathermy machine and an electronic cooker, and that the latter might eventually be installed in transatlantic ships and airplanes. 
There was therefore some point in attempting to reach world agreement on this subject."

Radio frequencies in the ISM bands have been used for communication purposes, although such devices may experience interference from non-communication sources. In the United States, as early as 1958 the Class D Citizens Band, a Part 95 service, was allocated to frequencies that are also allocated to ISM. [1]

In the U.S., the FCC first made unlicensed spread spectrum available in the ISM bands in rules adopted on May 9, 1985.

Many other countries later developed similar regulations, enabling use of this technology. The FCC action was proposed by Michael Marcus of the FCC staff in 1980, and the subsequent regulatory action took five more years. It was part of a broader proposal to allow civil use of spread spectrum technology and was opposed at the time by mainstream equipment manufacturers and many radio system operators.

Industrial, scientific and medical (ISM) applications of radio frequency energy (short: ISM applications) are defined in article 1.15 of the International Telecommunication Union's (ITU) Radio Regulations (RR) as the «Operation of equipment or appliances designed to generate and use locally radio frequency energy for industrial, scientific, medical, domestic or similar purposes, excluding applications in the field of telecommunications.»

The original ISM specifications envisioned that the bands would be used primarily for non-communication purposes, such as heating. The bands are still widely used for these purposes. For many people, the most commonly encountered ISM device is the home microwave oven operating at 2.45 GHz, which uses microwaves to cook food.
Industrial heating is another large application area, including induction heating, microwave heat treating, plastic softening, and plastic welding processes. In medical settings, shortwave and microwave diathermy machines use radio waves in the ISM bands to apply deep heating to the body for relaxation and healing. More recently, hyperthermia therapy uses microwaves to heat tissue to kill cancer cells.

However, as detailed below, the increasing congestion of the radio spectrum, the increasing sophistication of microelectronics, and the attraction of unlicensed use have in recent decades led to an explosion of uses of these bands for short-range communication systems for wireless devices, which are now by far the largest uses of these bands. These are sometimes called "non-ISM" uses, since they do not fall under the originally envisioned "industrial", "scientific", and "medical" application areas. One of the largest applications has been wireless networking (Wi-Fi). The IEEE 802.11 wireless networking protocols, the standards on which almost all wireless systems are based, use the ISM bands. Virtually all laptops, tablet computers, computer printers and cellphones now have 802.11 wireless modems using the 2.4 and 5.7 GHz ISM bands. Bluetooth is another networking technology using the 2.4 GHz band, which can be problematic given the probability of interference. Near-field communication (NFC) devices such as proximity cards and contactless smart cards use the lower-frequency 13 and 27 MHz ISM bands.
Other short-range devices using the ISM bands include wireless microphones, baby monitors, garage door openers, wireless doorbells, keyless entry systems for vehicles, radio control channels for UAVs (drones), wireless surveillance systems, RFID systems for merchandise, and wild animal tracking systems.

Some electrodeless lamp designs are ISM devices, which use RF emissions to excite fluorescent tubes. Sulfur lamps are commercially available plasma lamps which use a 2.45 GHz magnetron to heat sulfur into a brightly glowing plasma.

Long-distance wireless power systems have been proposed and experimented with, which would use high-power transmitters and rectennas, in lieu of overhead transmission lines and underground cables, to send power to remote locations. NASA has studied using microwave power transmission on 2.45 GHz to send energy collected by solar power satellites back to the ground.

Also in space applications, the Helicon Double Layer ion thruster is a prototype spacecraft propulsion engine which uses a 13.56 MHz transmission to break down and heat gas into plasma.

In recent years, ISM bands have also been shared with (non-ISM) license-free error-tolerant communications applications such as wireless sensor networks in the 915 MHz and 2.450 GHz bands, as well as wireless LANs and cordless phones in the 915 MHz, 2.450 GHz, and 5.800 GHz bands. Because unlicensed devices are required to be tolerant of ISM emissions in these bands, unlicensed low-power users are generally able to operate in these bands without causing problems for ISM users. ISM equipment does not necessarily include a radio receiver in the ISM band (e.g.
a microwave oven does not have a receiver).

In the United States, according to 47 CFR Part 15.5, low-power communication devices must accept interference from licensed users of that frequency band, and the Part 15 device must not cause interference to licensed users. Note that the 915 MHz band should not be used in countries outside Region 2 (except those that specifically allow it, such as Australia and Israel), especially those that use the GSM-900 band for cellphones. The ISM bands are also widely used for radio-frequency identification (RFID) applications, the most commonly used band being the 13.56 MHz band used by systems compliant with ISO/IEC 14443, including those used by biometric passports and contactless smart cards.

In Europe, the use of the ISM band is covered by Short Range Device regulations issued by the European Commission, based on technical recommendations by CEPT and standards by ETSI. In most of Europe, the LPD433 band is allowed for license-free voice communication in addition to PMR446.

Wireless network devices use wavebands as follows:

IEEE 802.15.4, Zigbee and other personal area networks may use the 915 MHz and 2450 MHz ISM bands because of frequency sharing between different allocations.

Wireless LANs and cordless phones can also use bands other than those shared with ISM, but such uses require approval on a country-by-country basis. DECT phones use allocated spectrum outside the ISM bands that differs in Europe and North America. Ultra-wideband LANs require more spectrum than the ISM bands can provide, so the relevant standards such as IEEE 802.15.4a are designed to make use of spectrum outside the ISM bands.
Although these additional bands are outside the official ITU-R ISM bands, because they are used for the same types of low-power personal communications they are sometimes incorrectly referred to as ISM bands as well.

Several brands of radio control equipment use the 2.4 GHz band range for low-power remote control of toys, from gas-powered cars to miniature aircraft.

Worldwide Digital Cordless Telecommunications (WDCT) is a technology that uses the 2.4 GHz radio spectrum.

Google's Project Loon used ISM bands (specifically the 2.4 and 5.8 GHz bands) for balloon-to-balloon and balloon-to-ground communications.

Pursuant to 47 CFR Part 97, some ISM bands are used by licensed amateur radio operators for communication, including amateur television.
The ISM radio bands are portions of the radio spectrum reserved internationally for industrial, scientific, and medical (ISM) purposes, excluding applications in telecommunications. Examples of applications for the use of radio frequency (RF) energy in these bands include radio-frequency process heating, microwave ovens, and medical diathermy machines. The powerful emissions of these devices can create electromagnetic interference and disrupt radio communication using the same frequency, so these devices are limited to certain bands of frequencies. In general, communications equipment operating in ISM bands must tolerate any interference generated by ISM applications, and users have no regulatory protection from ISM device operation in these bands. Despite the intent of the original allocations, in recent years the fastest-growing use of these bands has been for short-range, low-power wireless communications systems. These bands are often approved for such devices, which can then be used without the government license that would otherwise be required for transmitters, and ISM frequencies are a natural choice for this purpose because devices in them must already tolerate interference. Cordless phones, Bluetooth devices, near-field communication (NFC) devices, garage door openers, baby monitors, and wireless computer networks (Wi-Fi) may all use the ISM frequencies, although these low-power transmitters are not considered to be ISM devices.
2001-11-26T23:41:04Z
2023-12-31T03:00:58Z
[ "Template:About-distinguish", "Template:Nowrap", "Template:Cite web", "Template:Cite report", "Template:Webarchive", "Template:Cite book", "Template:Authority control", "Template:Short description", "Template:Sort", "Template:Further", "Template:Expand section", "Template:Citation needed", "Template:Notelist" ]
https://en.wikipedia.org/wiki/ISM_radio_band
Series (mathematics)
In mathematics, a series is, roughly speaking, the operation of adding infinitely many quantities, one after the other, to a given starting quantity. The study of series is a major part of calculus and its generalization, mathematical analysis. Series are used in most areas of mathematics, even for studying finite structures (such as in combinatorics) through generating functions. In addition to their ubiquity in mathematics, infinite series are also widely used in other quantitative disciplines such as physics, computer science, statistics and finance. For a long time, the idea that such a potentially infinite summation could produce a finite result was considered paradoxical. This paradox was resolved using the concept of a limit during the 17th century. Zeno's paradox of Achilles and the tortoise illustrates this counterintuitive property of infinite sums: Achilles runs after a tortoise, but when he reaches the position of the tortoise at the beginning of the race, the tortoise has reached a second position; when he reaches this second position, the tortoise is at a third position, and so on. Zeno concluded that Achilles could never reach the tortoise, and thus that movement does not exist. Zeno divided the race into infinitely many sub-races, each requiring a finite amount of time, so that the total time for Achilles to catch the tortoise is given by a series. The resolution of the paradox is that, although the series has an infinite number of terms, it has a finite sum, which gives the time necessary for Achilles to catch up with the tortoise. In modern terminology, any (ordered) infinite sequence (a_1, a_2, a_3, …) of terms (that is, numbers, functions, or anything that can be added) defines a series, which is the operation of adding the a_i one after the other. To emphasize that there are an infinite number of terms, a series may be called an infinite series.
Such a series is represented (or denoted) by an expression like

a_1 + a_2 + a_3 + ⋯

or, using the summation sign,

∑_{i=1}^∞ a_i.

The infinite sequence of additions implied by a series cannot be effectively carried on (at least in a finite amount of time). However, if the set to which the terms and their finite sums belong has a notion of limit, it is sometimes possible to assign a value to a series, called the sum of the series. This value is the limit as n tends to infinity (if the limit exists) of the finite sums of the n first terms of the series, which are called the nth partial sums of the series. That is,

∑_{i=1}^∞ a_i = lim_{n→∞} ∑_{i=1}^n a_i.

When this limit exists, one says that the series is convergent or summable, or that the sequence (a_1, a_2, a_3, …) is summable. In this case, the limit is called the sum of the series. Otherwise, the series is said to be divergent. The notation ∑_{i=1}^∞ a_i denotes both the series (that is, the implicit process of adding the terms one after the other indefinitely) and, if the series is convergent, the sum of the series (the result of the process). This is a generalization of the similar convention of denoting by a + b both the addition (the process of adding) and its result (the sum of a and b). Generally, the terms of a series come from a ring, often the field R of the real numbers or the field C of the complex numbers. In this case, the set of all series is itself a ring (and even an associative algebra), in which the addition consists of adding the series term by term, and the multiplication is the Cauchy product. An infinite series or simply a series is an infinite sum, represented by an infinite expression of the form

a_0 + a_1 + a_2 + ⋯,

where (a_n) is any ordered sequence of terms, such as numbers, functions, or anything else that can be added (an abelian group).
This is an expression that is obtained from the list of terms a_0, a_1, … by laying them side by side, and conjoining them with the symbol "+". A series may also be represented by using summation notation, such as ∑_{n=0}^∞ a_n. If an abelian group A of terms has a concept of limit (e.g., if it is a metric space), then some series, the convergent series, can be interpreted as having a value in A, called the sum of the series. This includes the common cases from calculus, in which the group is the field of real numbers or the field of complex numbers. Given a series s = ∑_{n=0}^∞ a_n, its kth partial sum is

s_k = ∑_{n=0}^k a_n.

By definition, the series ∑_{n=0}^∞ a_n converges to the limit L (or simply sums to L) if the sequence of its partial sums has a limit L. In this case, one usually writes

∑_{n=0}^∞ a_n = L.

A series is said to be convergent if it converges to some limit, or divergent when it does not; the value of this limit, if it exists, is then the value of the series. Equivalently, a series ∑ a_n is said to converge or to be convergent when the sequence (s_k) of partial sums has a finite limit. If the limit of s_k is infinite or does not exist, the series is said to diverge. When the limit of partial sums exists, it is called the value (or sum) of the series. An easy way that an infinite series can converge is if all the a_n are zero for n sufficiently large. Such a series can be identified with a finite sum, so it is only infinite in a trivial sense. Working out the properties of the series that converge, even if infinitely many terms are nonzero, is the essence of the study of series. Consider the example

1 + 1/2 + 1/4 + 1/8 + ⋯ = ∑_{n=0}^∞ 1/2^n.

It is possible to "visualize" its convergence on the real number line: we can imagine a line of length 2, with successive segments marked off of lengths 1, 1/2, 1/4, etc.
There is always room to mark the next segment, because the amount of line remaining is always the same as the last segment marked: when we have marked off 1/2, we still have a piece of length 1/2 unmarked, so we can certainly mark the next 1/4. This argument does not prove that the sum is equal to 2 (although it is), but it does prove that it is at most 2. In other words, the series has an upper bound. Given that the series converges, proving that it is equal to 2 requires only elementary algebra. If the series is denoted S, it can be seen that

S/2 = 1/2 + 1/4 + 1/8 + ⋯ = S − 1.

Therefore,

S = 2.

The idiom can be extended to other, equivalent notions of series. For instance, a recurring decimal, as in

x = 0.111…,

encodes the series

∑_{n=1}^∞ 1/10^n.

Since these series always converge to real numbers (because of what is called the completeness property of the real numbers), to talk about the series in this way is the same as to talk about the numbers for which they stand. In particular, the decimal expansion 0.111... can be identified with 1/9. This leads to an argument that 9 × 0.111... = 0.999... = 1, which only relies on the fact that the limit laws for series preserve the arithmetic operations; for more detail on this argument, see 0.999....

In general, the geometric series

∑_{n=0}^∞ z^n

converges if and only if |z| < 1, in which case it converges to 1/(1 − z). The harmonic series

∑_{n=1}^∞ 1/n

is divergent. The alternating harmonic series

∑_{n=1}^∞ (−1)^{n+1}/n

is convergent. A telescoping series

∑_{n=1}^∞ (b_n − b_{n+1})

converges if the sequence b_n converges to a limit L as n goes to infinity; the value of the series is then b_1 − L. The p-series

∑_{n=1}^∞ 1/n^p

converges if p > 1 and diverges for p ≤ 1, which can be shown with the integral criterion described below in convergence tests. As a function of p, the sum of this series is Riemann's zeta function. Hypergeometric series and their generalizations (such as basic hypergeometric series and elliptic hypergeometric series) frequently appear in integrable systems and mathematical physics. Partial summation takes as input a sequence, (a_n), and gives as output another sequence, (S_N).
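The partial-sum sequence just described is easy to compute concretely. A minimal sketch in plain Python (the helper name `partial_sums` is illustrative, not from the text): it maps a sequence of terms to the sequence of its partial sums, showing the geometric series 1 + 1/2 + 1/4 + ⋯ approaching 2 while the harmonic partial sums grow without bound.

```python
def partial_sums(terms):
    """Map a sequence of terms (a_n) to the sequence of its partial sums (S_N)."""
    sums, s = [], 0.0
    for a in terms:
        s += a
        sums.append(s)
    return sums

# Geometric series 1 + 1/2 + 1/4 + ...: the partial sums approach 2.
geo = partial_sums(1 / 2**n for n in range(50))
print(geo[:4])        # [1.0, 1.5, 1.75, 1.875]
print(geo[-1])        # within 2**-49 of 2

# Harmonic series 1 + 1/2 + 1/3 + ...: the partial sums are unbounded,
# growing like ln(n) (about 12.09 after 100000 terms).
harm = partial_sums(1 / n for n in range(1, 100001))
print(harm[-1])
```

In computer-science terms this is exactly the prefix sum mentioned below; Python's standard library exposes the same operation as `itertools.accumulate`.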
It is thus a unary operation on sequences. Further, this function is linear, and thus is a linear operator on the vector space of sequences, denoted Σ. The inverse operator is the finite difference operator, denoted Δ. These behave as discrete analogues of integration and differentiation, only for series (functions of a natural number) instead of functions of a real variable. For example, the sequence (1, 1, 1, ...) has series (1, 2, 3, 4, ...) as its partial summation, which is analogous to the fact that ∫_0^x 1 dt = x. In computer science, the partial summation operation is known as prefix sum.

Series are classified not only by whether they converge or diverge, but also by the properties of the terms a_n (absolute or conditional convergence); the type of convergence of the series (pointwise, uniform); the class of the term a_n (whether it is a real number, arithmetic progression, trigonometric function); etc.

When a_n is a non-negative real number for every n, the sequence S_N of partial sums is non-decreasing. It follows that a series ∑ a_n with non-negative terms converges if and only if the sequence S_N of partial sums is bounded. For example, the series

∑_{n=1}^∞ 1/n^2

is convergent, because the inequality

1/n^2 ≤ 1/(n−1) − 1/n   (n ≥ 2)

and a telescoping-sum argument imply that the partial sums are bounded by 2. Determining the exact value of this series is the Basel problem; the sum is π^2/6.

Grouping the terms of a series does not reorder them, so the Riemann series theorem does not apply. A grouped series has partial sums that form a subsequence of the partial sums of the original series, which means that if the original series converges, so does the grouped series. But for divergent series that is not true: for example, 1 − 1 + 1 − 1 + ⋯ grouped in pairs gives the series 0 + 0 + 0 + ⋯, which is convergent. On the other hand, divergence of the grouped series implies that the original series must be divergent, which is sometimes useful, as in Oresme's proof that the harmonic series diverges.

A series converges absolutely if the series of absolute values

∑ |a_n|

converges.
This is sufficient to guarantee not only that the original series converges to a limit, but also that any reordering of it converges to the same limit. A series of real or complex numbers is said to be conditionally convergent (or semi-convergent) if it is convergent but not absolutely convergent. A famous example is the alternating series

∑_{n=1}^∞ (−1)^{n+1}/n = 1 − 1/2 + 1/3 − 1/4 + ⋯,

which is convergent (and its sum is equal to ln 2), but the series formed by taking the absolute value of each term is the divergent harmonic series. The Riemann series theorem says that any conditionally convergent series can be reordered to make a divergent series, and moreover, if the a_n are real and S is any real number, one can find a reordering so that the reordered series converges with sum equal to S.

Abel's test is an important tool for handling semi-convergent series. If a series has the form

∑ a_n = ∑ λ_n b_n,

where the partial sums B_n = b_0 + ⋯ + b_n are bounded, λ_n has bounded variation, and lim λ_n B_n exists, then the series ∑ a_n is convergent. This applies to the point-wise convergence of many trigonometric series for 0 < x < 2π. Abel's method consists in writing b_{n+1} = B_{n+1} − B_n and in performing a transformation similar to integration by parts (called summation by parts), which relates the given series ∑ a_n to the absolutely convergent series

∑ (λ_{n+1} − λ_n) B_n.

The evaluation of truncation errors is an important procedure in numerical analysis (especially validated numerics and computer-assisted proof). When the conditions of the alternating series test are satisfied by S = ∑_{m=0}^∞ (−1)^m u_m, there is an exact error evaluation.
Set s_n to be the partial sum s_n = ∑_{m=0}^n (−1)^m u_m of the given alternating series S. Then the following inequality holds:

|S − s_n| ≤ u_{n+1}.

Taylor's theorem is a statement that includes the evaluation of the error term when the Taylor series is truncated. By using the ratio, we can obtain the evaluation of the error term when the hypergeometric series is truncated. For the matrix exponential, an error evaluation holds for the scaling and squaring method.

There exist many tests that can be used to determine whether particular series converge or diverge. A series of real- or complex-valued functions

∑_{n=0}^∞ f_n(x)

converges pointwise on a set E if the series converges for each x in E as an ordinary series of real or complex numbers. Equivalently, the partial sums

s_N(x) = ∑_{n=0}^N f_n(x)

converge to ƒ(x) as N → ∞ for each x ∈ E. A stronger notion of convergence of a series of functions is uniform convergence. A series converges uniformly if it converges pointwise to the function ƒ(x), and the error in approximating the limit by the Nth partial sum can be made arbitrarily small independently of x by choosing a sufficiently large N. Uniform convergence is desirable for a series because many properties of the terms of the series are then retained by the limit. For example, if a series of continuous functions converges uniformly, then the limit function is also continuous. Similarly, if the ƒ_n are integrable on a closed and bounded interval I and converge uniformly, then the series is also integrable on I and can be integrated term-by-term. Tests for uniform convergence include Weierstrass' M-test, Abel's uniform convergence test, Dini's test, and the Cauchy criterion. More sophisticated types of convergence of a series of functions can also be defined. In measure theory, for instance, a series of functions converges almost everywhere if it converges pointwise except on a certain set of measure zero.
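The alternating-series error bound above is easy to check numerically. A sketch using the Leibniz series π/4 = 1 − 1/3 + 1/5 − ⋯, which satisfies the alternating series test with u_m = 1/(2m + 1); the truncation error never exceeds the first omitted term:

```python
import math

# Leibniz series: S = sum_{m>=0} (-1)^m / (2m+1) = pi/4, with u_m = 1/(2m+1).
S = math.pi / 4

s_n = 0.0
for m in range(200):
    s_n += (-1)**m / (2 * m + 1)
    error = abs(S - s_n)
    bound = 1 / (2 * (m + 1) + 1)   # u_{m+1}, the first omitted term
    assert error <= bound           # |S - s_n| <= u_{n+1} holds at every step
print(f"after 200 terms: |S - s_n| = {error:.2e}, bound = {bound:.2e}")
```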
Other modes of convergence depend on a different metric space structure on the space of functions under consideration. For instance, a series of functions converges in mean on a set E to a limit function ƒ provided

∫_E |s_N(x) − f(x)|^2 dx → 0

as N → ∞.

A power series is a series of the form

∑_{n=0}^∞ a_n (x − c)^n.

The Taylor series at a point c of a function is a power series that, in many cases, converges to the function in a neighborhood of c. For example, the series

∑_{n=0}^∞ x^n/n!

is the Taylor series of e^x at the origin and converges to it for every x. Unless it converges only at x = c, such a series converges on a certain open disc of convergence centered at the point c in the complex plane, and may also converge at some of the points of the boundary of the disc. The radius of this disc is known as the radius of convergence, and can in principle be determined from the asymptotics of the coefficients a_n. The convergence is uniform on closed and bounded (that is, compact) subsets of the interior of the disc of convergence: to wit, it is uniformly convergent on compact sets.

Historically, mathematicians such as Leonhard Euler operated liberally with infinite series, even if they were not convergent. When calculus was put on a sound and correct foundation in the nineteenth century, rigorous proofs of the convergence of series were always required. While many uses of power series refer to their sums, it is also possible to treat power series as formal sums, meaning that no addition operations are actually performed, and the symbol "+" is an abstract symbol of conjunction which is not necessarily interpreted as corresponding to addition. In this setting, the sequence of coefficients itself is of interest, rather than the convergence of the series. Formal power series are used in combinatorics to describe and study sequences that are otherwise difficult to handle, for example, using the method of generating functions. The Hilbert–Poincaré series is a formal power series used to study graded algebras.
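The formal point of view is concrete to compute with. A minimal sketch (illustrative helper names, coefficient lists truncated to a fixed length) of term-by-term addition and the Cauchy product described earlier:

```python
# A truncated formal power series as a list: series[n] is the coefficient of x^n.

def add(a, b):
    """Term-by-term addition of two coefficient lists."""
    return [x + y for x, y in zip(a, b)]

def cauchy_product(a, b):
    """Cauchy product: c_m = sum_{k=0..m} a_k * b_{m-k}."""
    n = min(len(a), len(b))
    return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(n)]

geom = [1] * 6                         # 1/(1-x) = 1 + x + x^2 + ... (truncated)
print(cauchy_product(geom, geom))      # [1, 2, 3, 4, 5, 6], i.e. 1/(1-x)^2 = sum (n+1) x^n
```

No convergence question arises here: the operations act on the coefficient sequences alone, which is exactly the formal-sum reading of "+".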
Even if the limit of the power series is not considered, if the terms support appropriate structure then it is possible to define operations such as addition, multiplication, derivative, and antiderivative for power series "formally", treating the symbol "+" as if it corresponded to addition. In the most common setting, the terms come from a commutative ring, so that the formal power series can be added term-by-term and multiplied via the Cauchy product. In this case the algebra of formal power series is the total algebra of the monoid of natural numbers over the underlying term ring. If the underlying term ring is a differential algebra, then the algebra of formal power series is also a differential algebra, with differentiation performed term-by-term.

Laurent series generalize power series by admitting terms into the series with negative as well as positive exponents. A Laurent series is thus any series of the form

∑_{n=−∞}^∞ a_n x^n.

If such a series converges, then in general it does so in an annulus rather than a disc, and possibly some boundary points. The series converges uniformly on compact subsets of the interior of the annulus of convergence.

A Dirichlet series is one of the form

∑_{n=1}^∞ a_n/n^s,

where s is a complex number. For example, if all a_n are equal to 1, then the Dirichlet series is the Riemann zeta function

ζ(s) = ∑_{n=1}^∞ 1/n^s.

Like the zeta function, Dirichlet series in general play an important role in analytic number theory. Generally a Dirichlet series converges if the real part of s is greater than a number called the abscissa of convergence. In many cases, a Dirichlet series can be extended to an analytic function outside the domain of convergence by analytic continuation. For example, the Dirichlet series for the zeta function converges absolutely when Re(s) > 1, but the zeta function can be extended to a holomorphic function defined on C ∖ {1} with a simple pole at 1. This series can be directly generalized to general Dirichlet series.
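As a numerical illustration of the zeta-function Dirichlet series above: at s = 2 its partial sums approach ζ(2) = π²/6 ≈ 1.644934, the value of the Basel-problem series mentioned earlier. A plain-Python sketch:

```python
import math

# Partial sums of zeta(2) = sum_{n>=1} 1/n^2. The tail beyond N is about 1/N,
# so a million terms give roughly six correct digits.
N = 10**6
partial = sum(1 / n**2 for n in range(1, N + 1))

print(partial)           # approaches pi^2 / 6
print(math.pi**2 / 6)
```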
A series of functions in which the terms are trigonometric functions is called a trigonometric series. The most important example of a trigonometric series is the Fourier series of a function.

The Greek mathematician Archimedes produced the first known summation of an infinite series with a method that is still used in the area of calculus today. He used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of π. Mathematicians from the Kerala school were studying infinite series c. 1350 CE. In the 17th century, James Gregory worked in the new decimal system on infinite series and published several Maclaurin series. In 1715, a general method for constructing the Taylor series for all functions for which they exist was provided by Brook Taylor. In the 18th century, Leonhard Euler developed the theory of hypergeometric series and q-series.

The investigation of the validity of infinite series is considered to begin with Gauss in the 19th century. Euler had already considered the hypergeometric series, on which Gauss published a memoir in 1812. It established simpler criteria of convergence, and the questions of remainders and the range of convergence. Cauchy (1821) insisted on strict tests of convergence; he showed that if two series are convergent their product is not necessarily so, and with him begins the discovery of effective criteria. The terms convergence and divergence had been introduced long before by Gregory (1668). Leonhard Euler and Gauss had given various criteria, and Colin Maclaurin had anticipated some of Cauchy's discoveries. Cauchy advanced the theory of power series by his expansion of a complex function in such a form. Abel (1826) in his memoir on the binomial series corrected certain of Cauchy's conclusions, and gave a completely scientific summation of the series for complex values of m and x.
He showed the necessity of considering the subject of continuity in questions of convergence. Cauchy's methods led to special rather than general criteria, and the same may be said of Raabe (1832), who made the first elaborate investigation of the subject, of De Morgan (from 1842), whose logarithmic test DuBois-Reymond (1873) and Pringsheim (1889) have shown to fail within a certain region; of Bertrand (1842), Bonnet (1843), Malmsten (1846, 1847, the latter without integration); Stokes (1847), Paucker (1852), Chebyshev (1852), and Arndt (1853). General criteria began with Kummer (1835), and have been studied by Eisenstein (1847), Weierstrass in his various contributions to the theory of functions, Dini (1867), DuBois-Reymond (1873), and many others. Pringsheim's memoirs (1889) present the most complete general theory. The theory of uniform convergence was treated by Cauchy (1821), his limitations being pointed out by Abel, but the first to attack it successfully were Seidel and Stokes (1847–48). Cauchy took up the problem again (1853), acknowledging Abel's criticism, and reaching the same conclusions which Stokes had already found. Thomae used the doctrine (1866), but there was great delay in recognizing the importance of distinguishing between uniform and non-uniform convergence, in spite of the demands of the theory of functions. A series is said to be semi-convergent (or conditionally convergent) if it is convergent but not absolutely convergent. Semi-convergent series were studied by Poisson (1823), who also gave a general form for the remainder of the Maclaurin formula. The most important solution of the problem is due, however, to Jacobi (1834), who attacked the question of the remainder from a different standpoint and reached a different formula. This expression was also worked out, and another one given, by Malmsten (1847). Schlömilch (Zeitschrift, Vol.I, p. 
192, 1856) also improved Jacobi's remainder, and showed the relation between the remainder and Bernoulli's function. Genocchi (1852) further contributed to the theory. Among the early writers was Wronski, whose "loi suprême" (1815) was hardly recognized until Cayley (1873) brought it into prominence.

Fourier series were being investigated as the result of physical considerations at the same time that Gauss, Abel, and Cauchy were working out the theory of infinite series. Series for the expansion of sines and cosines of multiple arcs in powers of the sine and cosine of the arc had been treated by Jacob Bernoulli (1702) and his brother Johann Bernoulli (1701), and still earlier by Vieta. Euler and Lagrange simplified the subject, as did Poinsot, Schröter, Glaisher, and Kummer. Fourier (1807) set for himself a different problem, to expand a given function of x in terms of the sines or cosines of multiples of x, a problem which he embodied in his Théorie analytique de la chaleur (1822). Euler had already given the formulas for determining the coefficients in the series; Fourier was the first to assert and attempt to prove the general theorem. Poisson (1820–23) also attacked the problem from a different standpoint. Fourier did not, however, settle the question of convergence of his series, a matter left for Cauchy (1826) to attempt and for Dirichlet (1829) to handle in a thoroughly scientific manner (see convergence of Fourier series). Dirichlet's treatment (Crelle, 1829) of trigonometric series was the subject of criticism and improvement by Riemann (1854), Heine, Lipschitz, Schläfli, and du Bois-Reymond. Among other prominent contributors to the theory of trigonometric and Fourier series were Dini, Hermite, Halphen, Krause, Byerly and Appell. Asymptotic series, otherwise asymptotic expansions, are infinite series whose partial sums become good approximations in the limit of some point of the domain.
In general they do not converge, but they are useful as sequences of approximations, each of which provides a value close to the desired answer for a finite number of terms. The difference is that an asymptotic series cannot be made to produce an answer as exact as desired, the way that convergent series can. In fact, after a certain number of terms, a typical asymptotic series reaches its best approximation; if more terms are included, most such series will produce worse answers.

Under many circumstances, it is desirable to assign a limit to a series which fails to converge in the usual sense. A summability method is such an assignment of a limit to a subset of the set of divergent series which properly extends the classical notion of convergence. Summability methods include Cesàro summation, (C, k) summation, Abel summation, and Borel summation, in increasing order of generality (and hence applicable to increasingly divergent series). A variety of general results concerning possible summability methods are known. The Silverman–Toeplitz theorem characterizes matrix summability methods, which are methods for summing a divergent series by applying an infinite matrix to the vector of coefficients. The most general method for summing a divergent series is non-constructive, and concerns Banach limits.

Definitions may be given for sums over an arbitrary index set $I$. There are two main differences from the usual notion of series: first, there is no specific order given on the set $I$; second, the set $I$ may be uncountable. The notion of convergence needs to be strengthened, because the concept of conditional convergence depends on the ordering of the index set.
If $a : I \to G$ is a function from an index set $I$ to a set $G$, then the "series" associated to $a$ is the formal sum of the elements $a(x) \in G$ over the index elements $x \in I$, denoted by
$$\sum_{x \in I} a(x).$$
When the index set is the natural numbers $I = \mathbb{N}$, the function $a : \mathbb{N} \to G$ is a sequence denoted by $a(n) = a_n$. A series indexed on the natural numbers is an ordered formal sum, and so we rewrite $\sum_{n \in \mathbb{N}}$ as $\sum_{n=0}^{\infty}$ in order to emphasize the ordering induced by the natural numbers. Thus, we obtain the common notation for a series indexed by the natural numbers,
$$\sum_{n=0}^{\infty} a_n = a_0 + a_1 + a_2 + \cdots.$$

When summing a family $\{a_i : i \in I\}$ of non-negative real numbers, define
$$\sum_{i \in I} a_i = \sup \left\{ \sum_{i \in A} a_i : A \subseteq I, A \text{ finite} \right\} \in [0, +\infty].$$
When the supremum is finite then the set of $i \in I$ such that $a_i > 0$ is countable. Indeed, for every $n \geq 1$, the cardinality $|A_n|$ of the set $A_n = \{i \in I : a_i > 1/n\}$ is finite because
$$\frac{1}{n} |A_n| \leq \sum_{i \in A_n} a_i \leq \sum_{i \in I} a_i < \infty.$$
If $I$ is countably infinite and enumerated as $I = \{i_0, i_1, \ldots\}$ then the above defined sum satisfies
$$\sum_{i \in I} a_i = \sum_{k=0}^{+\infty} a_{i_k},$$
provided the value $\infty$ is allowed for the sum of the series. Any sum over non-negative reals can be understood as the integral of a non-negative function with respect to the counting measure, which accounts for the many similarities between the two constructions.
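The supremum definition above can be checked numerically; this is a small sketch, with the family $a_i = 2^{-i}$ on $I = \mathbb{N}$ as our chosen example:

```python
# The sum of a family of non-negative reals over an arbitrary index set
# is the supremum of its finite partial sums.  For a_i = 2**(-i) on the
# natural numbers, every finite partial sum stays below 2, and sums
# over initial segments {0, ..., n-1} approach that supremum.

def finite_partial_sum(a, A):
    """Sum of the subfamily indexed by a finite subset A."""
    return sum(a(i) for i in A)

def a(i):
    return 2.0 ** (-i)

sums = [finite_partial_sum(a, range(n)) for n in range(1, 50)]
assert all(s < 2 for s in sums)   # 2 bounds every finite partial sum
print(max(sums))                  # just below the supremum 2
```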
Let $a : I \to X$ be a map, also denoted by $(a_i)_{i \in I}$, from some non-empty set $I$ into a Hausdorff abelian topological group $X$. Let $\operatorname{Finite}(I)$ be the collection of all finite subsets of $I$, viewed as a directed set, ordered under inclusion $\subseteq$ with union as join. The family $(a_i)_{i \in I}$ is said to be unconditionally summable if the following limit, which is denoted by $\sum_{i \in I} a_i$ and is called the sum of $(a_i)_{i \in I}$, exists in $X$:
$$\sum_{i \in I} a_i := \lim_{A \in \operatorname{Finite}(I)} \sum_{i \in A} a_i.$$
Saying that the sum $S := \sum_{i \in I} a_i$ is the limit of finite partial sums means that for every neighborhood $V$ of the origin in $X$, there exists a finite subset $A_0$ of $I$ such that
$$S - \sum_{i \in A} a_i \in V \qquad \text{for every finite } A \supseteq A_0.$$
Because $\operatorname{Finite}(I)$ is not totally ordered, this is not a limit of a sequence of partial sums, but rather of a net. For every neighborhood $W$ of the origin in $X$, there is a smaller neighborhood $V$ such that $V - V \subseteq W$.
It follows that the finite partial sums of an unconditionally summable family $(a_i)_{i \in I}$ form a Cauchy net, that is, for every neighborhood $W$ of the origin in $X$, there exists a finite subset $A_0$ of $I$ such that
$$\sum_{i \in A_1} a_i - \sum_{i \in A_2} a_i \in W \qquad \text{for all finite } A_1, A_2 \supseteq A_0,$$
which implies that $a_i \in W$ for every $i \in I \setminus A_0$ (by taking $A_1 := A_0 \cup \{i\}$ and $A_2 := A_0$). When $X$ is complete, a family $(a_i)_{i \in I}$ is unconditionally summable in $X$ if and only if the finite sums satisfy the latter Cauchy net condition. When $X$ is complete and $(a_i)_{i \in I}$ is unconditionally summable in $X$, then for every subset $J \subseteq I$, the corresponding subfamily $(a_j)_{j \in J}$ is also unconditionally summable in $X$. When the sum of a family of non-negative numbers, in the extended sense defined before, is finite, then it coincides with the sum in the topological group $X = \mathbb{R}$. If a family $(a_i)_{i \in I}$ in $X$ is unconditionally summable, then for every neighborhood $W$ of the origin in $X$, there is a finite subset $A_0 \subseteq I$ such that $a_i \in W$ for every index $i$ not in $A_0$. If $X$ is a first-countable space, then it follows that the set of $i \in I$ such that $a_i \neq 0$ is countable.
This need not be true in a general abelian topological group (see examples below). Suppose that $I = \mathbb{N}$. If a family $a_n$, $n \in \mathbb{N}$, is unconditionally summable in a Hausdorff abelian topological group $X$, then the series in the usual sense converges and has the same sum,
$$\sum_{n=0}^{\infty} a_n = \sum_{n \in \mathbb{N}} a_n.$$
By nature, the definition of unconditional summability is insensitive to the order of the summation. When $\sum a_n$ is unconditionally summable, then the series remains convergent after any permutation $\sigma : \mathbb{N} \to \mathbb{N}$ of the set $\mathbb{N}$ of indices, with the same sum,
$$\sum_{n=0}^{\infty} a_{\sigma(n)} = \sum_{n=0}^{\infty} a_n.$$
Conversely, if every permutation of a series $\sum a_n$ converges, then the series is unconditionally convergent. When $X$ is complete, unconditional convergence is also equivalent to the fact that all subseries are convergent; if $X$ is a Banach space, this is equivalent to saying that for every sequence of signs $\varepsilon_n = \pm 1$, the series
$$\sum_{n=0}^{\infty} \varepsilon_n a_n$$
converges in $X$.

If $X$ is a topological vector space (TVS) and $(x_i)_{i \in I}$ is a (possibly uncountable) family in $X$, then this family is summable if the limit $\lim_{A \in \operatorname{Finite}(I)} x_A$ of the net $(x_A)_{A \in \operatorname{Finite}(I)}$ exists in $X$, where $\operatorname{Finite}(I)$ is the directed set of all finite subsets of $I$ directed by inclusion $\subseteq$ and $x_A := \sum_{i \in A} x_i$.
It is called absolutely summable if, in addition, for every continuous seminorm $p$ on $X$, the family $(p(x_i))_{i \in I}$ is summable. If $X$ is a normable space and $(x_i)_{i \in I}$ is an absolutely summable family in $X$, then necessarily all but a countable collection of the $x_i$ are zero. Hence, in normed spaces, it is usually only necessary to consider series with countably many terms. Summable families play an important role in the theory of nuclear spaces.

The notion of series can be easily extended to the case of a seminormed space. If $(x_n)$ is a sequence of elements of a normed space $X$ and $x \in X$, then the series $\sum x_n$ converges to $x$ in $X$ if the sequence of partial sums $\left( \sum_{n=0}^{N} x_n \right)_{N=1}^{\infty}$ converges to $x$ in $X$; to wit,
$$x = \lim_{N \to \infty} \sum_{n=0}^{N} x_n.$$
More generally, convergence of series can be defined in any abelian Hausdorff topological group. Specifically, in this case, $\sum x_n$ converges to $x$ if the sequence of partial sums converges to $x$. If $(X, |\cdot|)$ is a seminormed space, then the notion of absolute convergence becomes: a series $\sum_{i \in I} x_i$ of vectors in $X$ converges absolutely if
$$\sum_{i \in I} |x_i| < +\infty,$$
in which case all but at most countably many of the values $|x_i|$ are necessarily zero.
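The contrast between unconditional convergence and mere convergence can be seen numerically. This sketch rearranges the alternating harmonic series, a standard example chosen by us, so that its sum changes:

```python
import math

# The alternating harmonic series 1 - 1/2 + 1/3 - ... converges
# conditionally to ln 2.  Taking its positive terms two at a time
# before each negative term gives a rearrangement whose sum is
# (3/2) ln 2 instead, so the convergence is not unconditional.

N = 100_000
alternating = [(-1) ** (n + 1) / n for n in range(1, N + 1)]

pos = [1 / n for n in range(1, N + 1, 2)]   # 1, 1/3, 1/5, ...
neg = [1 / n for n in range(2, N + 1, 2)]   # 1/2, 1/4, 1/6, ...
rearranged = []
for k in range(len(neg) // 2):
    # two positive terms, then one negative term
    rearranged += [pos[2 * k], pos[2 * k + 1], -neg[k]]

print(sum(alternating), math.log(2))        # both close to 0.6931
print(sum(rearranged), 1.5 * math.log(2))   # both close to 1.0397
```

An absolutely convergent series, by contrast, yields the same sum under every permutation of its terms.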
If a countable series of vectors in a Banach space converges absolutely then it converges unconditionally, but the converse only holds in finite-dimensional Banach spaces (a theorem of Dvoretzky & Rogers (1950)).

Conditionally convergent series can be considered if $I$ is a well-ordered set, for example, an ordinal number $\alpha_0$. In this case, define by transfinite recursion
$$\sum_{\beta < \alpha + 1} a_\beta = a_\alpha + \sum_{\beta < \alpha} a_\beta$$
and, for a limit ordinal $\alpha$,
$$\sum_{\beta < \alpha} a_\beta = \lim_{\gamma \to \alpha} \sum_{\beta < \gamma} a_\beta$$
if this limit exists. If all limits exist up to $\alpha_0$, then the series converges.

For example, given a function $f : X \to Y$, with $Y$ an abelian group, define for every $a \in X$
$$f_a(x) := \begin{cases} 0 & x \neq a, \\ f(a) & x = a, \end{cases}$$
a function whose support is a singleton $\{a\}$. Then
$$f = \sum_{a \in X} f_a$$
in the topology of pointwise convergence (that is, the sum is taken in the infinite product group $Y^X$).

In the definition of partitions of unity, one constructs sums of functions over an arbitrary index set $I$,
$$\sum_{i \in I} \varphi_i(x) = 1.$$
While, formally, this requires a notion of sums of uncountable series, by construction there are, for every given $x$, only finitely many nonzero terms in the sum, so issues regarding convergence of such sums do not arise. Actually, one usually assumes more: the family of functions is locally finite, that is, for every $x$ there is a neighborhood of $x$ in which all but a finite number of functions vanish. Any regularity property of the $\varphi_i$, such as continuity or differentiability, that is preserved under finite sums will be preserved for the sum of any subcollection of this family of functions.

On the first uncountable ordinal $\omega_1$ viewed as a topological space in the order topology, the constant function $f : [0, \omega_1) \to [0, \omega_1]$ given by $f(\alpha) = 1$ satisfies
$$\sum_{\alpha \in [0, \omega_1)} f(\alpha) = \omega_1$$
(in other words, $\omega_1$ copies of 1 is $\omega_1$) only if one takes a limit over all countable partial sums, rather than finite partial sums. This space is not separable.
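A toy version of the singleton-support decomposition above, with a finite set $X$ so the pointwise sums are literally finite (the dict encoding of functions is ours):

```python
# Decompose a function f : X -> Y into pieces f_a that vanish
# everywhere except at a single point a, then recombine them by
# pointwise summation.

X = ["a", "b", "c"]
f = {"a": 1, "b": 2, "c": 3}

# f_a equals f(a) at a and 0 elsewhere, so its support is {a}.
pieces = [{x: (f[a] if x == a else 0) for x in X} for a in X]

# Pointwise sum of the pieces recovers f exactly.
recombined = {x: sum(p[x] for p in pieces) for x in X}
assert recombined == f
```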
In mathematics, a series is, roughly speaking, the operation of adding infinitely many quantities, one after the other, to a given starting quantity. The study of series is a major part of calculus and its generalization, mathematical analysis. Series are used in most areas of mathematics, even for studying finite structures (such as in combinatorics) through generating functions. In addition to their ubiquity in mathematics, infinite series are also widely used in other quantitative disciplines such as physics, computer science, statistics and finance.

For a long time, the idea that such a potentially infinite summation could produce a finite result was considered paradoxical. This paradox was resolved using the concept of a limit during the 17th century. Zeno's paradox of Achilles and the tortoise illustrates this counterintuitive property of infinite sums: Achilles runs after a tortoise, but when he reaches the position of the tortoise at the beginning of the race, the tortoise has reached a second position; when he reaches this second position, the tortoise is at a third position, and so on. Zeno concluded that Achilles could never reach the tortoise, and thus that movement does not exist. Zeno divided the race into infinitely many sub-races, each requiring a finite amount of time, so that the total time for Achilles to catch the tortoise is given by a series. The resolution of the paradox is that, although the series has an infinite number of terms, it has a finite sum, which gives the time necessary for Achilles to catch up with the tortoise.

In modern terminology, any (ordered) infinite sequence $(a_1, a_2, a_3, \ldots)$ of terms (that is, numbers, functions, or anything that can be added) defines a series, which is the operation of adding the $a_i$ one after the other.
To emphasize that there are an infinite number of terms, a series may be called an infinite series. Such a series is represented (or denoted) by an expression like
$$a_1 + a_2 + a_3 + \cdots,$$
or, using the summation sign,
$$\sum_{i=1}^{\infty} a_i.$$

The infinite sequence of additions implied by a series cannot be effectively carried on (at least in a finite amount of time). However, if the set to which the terms and their finite sums belong has a notion of limit, it is sometimes possible to assign a value to a series, called the sum of the series. This value is the limit as $n$ tends to infinity (if the limit exists) of the finite sums of the $n$ first terms of the series, which are called the $n$th partial sums of the series. That is,
$$\sum_{i=1}^{\infty} a_i = \lim_{n \to \infty} \sum_{i=1}^{n} a_i.$$

When this limit exists, one says that the series is convergent or summable, or that the sequence $(a_1, a_2, a_3, \ldots)$ is summable. In this case, the limit is called the sum of the series. Otherwise, the series is said to be divergent.

The notation $\sum_{i=1}^{\infty} a_i$ denotes both the series (that is, the implicit process of adding the terms one after the other indefinitely) and, if the series is convergent, the sum of the series (the result of the process). This is a generalization of the similar convention of denoting by $a + b$ both the addition (the process of adding) and its result (the sum of $a$ and $b$).

Generally, the terms of a series come from a ring, often the field $\mathbb{R}$ of the real numbers or the field $\mathbb{C}$ of the complex numbers.
In this case, the set of all series is itself a ring (and even an associative algebra), in which the addition consists of adding the series term by term, and the multiplication is the Cauchy product.

Basic properties

An infinite series or simply a series is an infinite sum, represented by an infinite expression of the form
$$a_0 + a_1 + a_2 + \cdots,$$
where $(a_n)$ is any ordered sequence of terms, such as numbers, functions, or anything else that can be added (an abelian group). This is an expression that is obtained from the list of terms $a_0, a_1, \ldots$ by laying them side by side, and conjoining them with the symbol "+". A series may also be represented by using summation notation, such as
$$\sum_{n=0}^{\infty} a_n.$$

If an abelian group $A$ of terms has a concept of limit (e.g., if it is a metric space), then some series, the convergent series, can be interpreted as having a value in $A$, called the sum of the series. This includes the common cases from calculus, in which the group is the field of real numbers or the field of complex numbers. Given a series $s = \sum_{n=0}^{\infty} a_n$, its $k$th partial sum is
$$s_k = \sum_{n=0}^{k} a_n = a_0 + a_1 + \cdots + a_k.$$

By definition, the series $\sum_{n=0}^{\infty} a_n$ converges to the limit $L$ (or simply sums to $L$) if the sequence of its partial sums has a limit $L$. In this case, one usually writes
$$\sum_{n=0}^{\infty} a_n = L.$$

A series is said to be convergent if it converges to some limit, or divergent when it does not.
The value of this limit, if it exists, is then the value of the series.

A series $\sum a_n$ is said to converge or to be convergent when the sequence $(s_k)$ of partial sums has a finite limit. If the limit of $s_k$ is infinite or does not exist, the series is said to diverge. When the limit of partial sums exists, it is called the value (or sum) of the series,
$$\sum_{n=0}^{\infty} a_n = \lim_{k \to \infty} s_k = \lim_{k \to \infty} \sum_{n=0}^{k} a_n.$$

An easy way that an infinite series can converge is if all the $a_n$ are zero for $n$ sufficiently large. Such a series can be identified with a finite sum, so it is only infinite in a trivial sense.

Working out the properties of the series that converge, even if infinitely many terms are nonzero, is the essence of the study of series. Consider the example
$$1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots + \frac{1}{2^n} + \cdots.$$
It is possible to "visualize" its convergence on the real number line: we can imagine a line of length 2, with successive segments marked off of lengths 1, 1/2, 1/4, etc. There is always room to mark the next segment, because the amount of line remaining is always the same as the last segment marked: when we have marked off 1/2, we still have a piece of length 1/2 unmarked, so we can certainly mark the next 1/4. This argument does not prove that the sum is equal to 2 (although it is), but it does prove that it is at most 2. In other words, the series has an upper bound. Given that the series converges, proving that it is equal to 2 requires only elementary algebra. If the series is denoted $S$, it can be seen that
$$\frac{S}{2} = \frac{1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots}{2} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots = S - 1.$$
Therefore,
$$S = 2.$$

The idiom can be extended to other, equivalent notions of series.
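The bounding argument for this series is easy to check numerically; a small Python sketch:

```python
# Partial sums of 1 + 1/2 + 1/4 + ... : the n-th partial sum equals
# 2 - 2**(1 - n), so every partial sum stays below 2, and the gap to 2
# is exactly the length of the last segment marked off.

partial, sums = 0.0, []
for n in range(50):
    partial += 2.0 ** (-n)
    sums.append(partial)

assert all(s < 2 for s in sums)   # 2 is an upper bound
print(sums[4])                    # 1.9375
print(2 - sums[-1])               # the tiny remaining gap, 2**(-49)
```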
For instance, a recurring decimal, as in
$$x = 0.111\ldots,$$
encodes the series
$$\sum_{n=1}^{\infty} \frac{1}{10^n}.$$

Since these series always converge to real numbers (because of what is called the completeness property of the real numbers), to talk about the series in this way is the same as to talk about the numbers for which they stand. In particular, the decimal expansion 0.111... can be identified with 1/9. This leads to an argument that 9 × 0.111... = 0.999... = 1, which only relies on the fact that the limit laws for series preserve the arithmetic operations; for more detail on this argument, see 0.999....

Examples of numerical series

In general, the geometric series
$$\sum_{n=0}^{\infty} z^n = 1 + z + z^2 + z^3 + \cdots$$
converges if and only if $|z| < 1$, in which case it converges to $\frac{1}{1-z}$.

The harmonic series
$$\sum_{n=1}^{\infty} \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \cdots$$
is divergent.

The alternating series
$$1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots = \ln 2$$
(alternating harmonic series) and
$$1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots = \frac{\pi}{4}$$
are convergent.

A telescoping series
$$\sum_{n=1}^{\infty} \left( b_n - b_{n+1} \right)$$
converges if the sequence $b_n$ converges to a limit $L$ as $n$ goes to infinity. The value of the series is then $b_1 - L$.

A $p$-series
$$\sum_{n=1}^{\infty} \frac{1}{n^p}$$
converges if $p > 1$ and diverges for $p \leq 1$, which can be shown with the integral criterion described below in convergence tests.
As a function of $p$, the sum of this series is Riemann's zeta function.

Hypergeometric series and their generalizations (such as basic hypergeometric series and elliptic hypergeometric series) frequently appear in integrable systems and mathematical physics.

Calculus and partial summation as an operation on sequences

Partial summation takes as input a sequence, $(a_n)$, and gives as output another sequence, $(S_N)$. It is thus a unary operation on sequences. Further, this function is linear, and thus is a linear operator on the vector space of sequences, denoted $\Sigma$. The inverse operator is the finite difference operator, denoted $\Delta$. These behave as discrete analogues of integration and differentiation, only for series (functions of a natural number) instead of functions of a real variable. For example, the sequence $(1, 1, 1, \ldots)$ has series $(1, 2, 3, 4, \ldots)$ as its partial summation, which is analogous to the fact that $\int_0^x 1 \, dt = x$.

In computer science, it is known as prefix sum.

Properties of series

Series are classified not only by whether they converge or diverge, but also by the properties of the terms $a_n$ (absolute or conditional convergence); type of convergence of the series (pointwise, uniform); the class of the term $a_n$ (whether it is a real number, arithmetic progression, trigonometric function); etc.

When $a_n$ is a non-negative real number for every $n$, the sequence $S_N$ of partial sums is non-decreasing.
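The partial-summation and finite-difference operators are easy to realize on finite sequences; a sketch (function names ours):

```python
from itertools import accumulate

# Partial summation (prefix sum) and finite difference act as inverse
# unary operations on sequences, discrete analogues of integration and
# differentiation.

def prefix_sum(seq):
    return list(accumulate(seq))

def finite_difference(seq):
    return [seq[0]] + [b - a for a, b in zip(seq, seq[1:])]

ones = [1, 1, 1, 1, 1]
print(prefix_sum(ones))                     # [1, 2, 3, 4, 5]
print(finite_difference(prefix_sum(ones)))  # [1, 1, 1, 1, 1]

# For non-negative terms the prefix sums are non-decreasing, which is
# the observation the convergence criterion below rests on.
S = prefix_sum([0.5, 0.0, 1.25, 0.25])
assert all(x <= y for x, y in zip(S, S[1:]))
```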
It follows that a series $\sum a_n$ with non-negative terms converges if and only if the sequence $S_N$ of partial sums is bounded.

For example, the series
$$\sum_{n=1}^{\infty} \frac{1}{n^2}$$
is convergent, because the inequality
$$\frac{1}{n^2} \leq \frac{1}{n-1} - \frac{1}{n}, \qquad n \geq 2,$$
and a telescopic sum argument imply that the partial sums are bounded by 2. The exact value of the original series is the Basel problem.

Grouping the terms of a series does not reorder them, so the Riemann series theorem does not apply. The partial sums of a grouped series form a subsequence of the partial sums of the original series, which means that if the original series converges, so does the grouped series. For divergent series this is not true: for example, $1 - 1 + 1 - 1 + \cdots$ grouped every two elements creates the series $0 + 0 + 0 + \cdots$, which is convergent. On the other hand, divergence of the grouped series implies that the original series must be divergent, which is sometimes useful, as in Oresme's proof that the harmonic series diverges.

A series
$$\sum_{n=0}^{\infty} a_n$$
converges absolutely if the series of absolute values
$$\sum_{n=0}^{\infty} |a_n|$$
converges. This is sufficient to guarantee not only that the original series converges to a limit, but also that any reordering of it converges to the same limit.

A series of real or complex numbers is said to be conditionally convergent (or semi-convergent) if it is convergent but not absolutely convergent.
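The telescoping bound above can be verified numerically; the comparison with $\pi^2/6$ anticipates the Basel problem just mentioned:

```python
import math

# 1/n**2 <= 1/(n-1) - 1/n for n >= 2, and the right-hand side
# telescopes, so every partial sum of sum 1/n**2 is at most
# 1 + (1 - 1/N) < 2.

N = 10_000
partial = sum(1.0 / n ** 2 for n in range(1, N + 1))
telescoped = 1 + sum(1.0 / (n - 1) - 1.0 / n for n in range(2, N + 1))

assert partial <= telescoped < 2
print(partial)           # 1.6448..., close to the Basel value
print(math.pi ** 2 / 6)  # 1.6449...
```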
A famous example is the alternating series
$$\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \cdots,$$
which is convergent (and its sum is equal to $\ln 2$), but the series formed by taking the absolute value of each term is the divergent harmonic series. The Riemann series theorem says that any conditionally convergent series can be reordered to make a divergent series, and moreover, if the $a_n$ are real and $S$ is any real number, that one can find a reordering so that the reordered series converges with sum equal to $S$.

Abel's test is an important tool for handling semi-convergent series. If a series has the form
$$\sum a_n = \sum \lambda_n b_n,$$
where the partial sums $B_n = b_0 + \cdots + b_n$ are bounded, $\lambda_n$ has bounded variation, and $\lim \lambda_n B_n$ exists:
$$\sup_N \left| \sum_{n=0}^{N} b_n \right| < \infty, \qquad \sum_n |\lambda_{n+1} - \lambda_n| < \infty, \qquad \lambda_n B_n \text{ converges},$$
then the series $\sum a_n$ is convergent. This applies to the point-wise convergence of many trigonometric series, as in
$$\sum_{n=2}^{\infty} \frac{\sin(nx)}{\ln n},$$
with $0 < x < 2\pi$.
Abel's method consists in writing $b_{n+1} = B_{n+1} - B_n$, and in performing a transformation similar to integration by parts (called summation by parts), that relates the given series $\sum a_n$ to the absolutely convergent series
$$\sum (\lambda_n - \lambda_{n+1}) B_n.$$

The evaluation of truncation errors is an important procedure in numerical analysis (especially validated numerics and computer-assisted proof).

When conditions of the alternating series test are satisfied by $S := \sum_{m=0}^{\infty} (-1)^m u_m$, there is an exact error evaluation. Set $s_n$ to be the partial sum $s_n := \sum_{m=0}^{n} (-1)^m u_m$ of the given alternating series $S$. Then the next inequality holds:
$$|S - s_n| \leq u_{n+1}.$$

Taylor's theorem is a statement that includes the evaluation of the error term when the Taylor series is truncated.

By using the ratio, we can obtain the evaluation of the error term when the hypergeometric series is truncated.

For the matrix exponential
$$\exp(X) := \sum_{k=0}^{\infty} \frac{1}{k!} X^k, \qquad X \in \mathbb{C}^{n \times n},$$
an error evaluation for the truncated series is known (the scaling and squaring method).

Convergence tests

There exist many tests that can be used to determine whether particular series converge or diverge.

Series of functions

A series of real- or complex-valued functions
$$\sum_{n=0}^{\infty} f_n(x)$$
converges pointwise on a set $E$, if the series converges for each $x$ in $E$ as
an ordinary series of real or complex numbers. Equivalently, the partial sums
$$s_N(x) = \sum_{n=0}^{N} f_n(x)$$
converge to $f(x)$ as $N \to \infty$ for each $x \in E$.

A stronger notion of convergence of a series of functions is uniform convergence. A series converges uniformly if it converges pointwise to the function $f(x)$, and the error in approximating the limit by the $N$th partial sum,
$$\sup_{x \in E} |s_N(x) - f(x)|,$$
can be made minimal independently of $x$ by choosing a sufficiently large $N$.

Uniform convergence is desirable for a series because many properties of the terms of the series are then retained by the limit. For example, if a series of continuous functions converges uniformly, then the limit function is also continuous. Similarly, if the $f_n$ are integrable on a closed and bounded interval $I$ and converge uniformly, then the series is also integrable on $I$ and can be integrated term-by-term. Tests for uniform convergence include the Weierstrass M-test, Abel's uniform convergence test, Dini's test, and the Cauchy criterion.

More sophisticated types of convergence of a series of functions can also be defined. In measure theory, for instance, a series of functions converges almost everywhere if it converges pointwise except on a certain set of measure zero. Other modes of convergence depend on a different metric space structure on the space of functions under consideration.
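Returning briefly to numerical series, the alternating-series error bound |S - s_n| <= u_{n+1} stated earlier can be verified directly; our example is the alternating series of u_m = 1/(m+1), whose sum is ln 2:

```python
import math

# For an alternating series sum (-1)**m * u_m with u_m decreasing to 0,
# the truncation error after the n-th partial sum is at most u_{n+1}.

S = math.log(2)   # exact sum of 1 - 1/2 + 1/3 - ...

def u(m):
    return 1.0 / (m + 1)

for n in (10, 100, 1000):
    s_n = sum((-1) ** m * u(m) for m in range(n + 1))
    assert abs(S - s_n) <= u(n + 1)   # the error bound holds
print("bound verified for n = 10, 100, 1000")
```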
For instance, a series of functions converges in mean on a set $E$ to a limit function $f$ provided
$$\int_E |s_N(x) - f(x)|^2 \, dx \to 0$$
as $N \to \infty$.

A power series is a series of the form
$$\sum_{n=0}^{\infty} a_n (x - c)^n.$$

The Taylor series at a point $c$ of a function is a power series that, in many cases, converges to the function in a neighborhood of $c$. For example, the series
$$\sum_{n=0}^{\infty} \frac{x^n}{n!}$$
is the Taylor series of $e^x$ at the origin and converges to it for every $x$.

Unless it converges only at $x = c$, such a series converges on a certain open disc of convergence centered at the point $c$ in the complex plane, and may also converge at some of the points of the boundary of the disc. The radius of this disc is known as the radius of convergence, and can in principle be determined from the asymptotics of the coefficients $a_n$. The convergence is uniform on closed and bounded (that is, compact) subsets of the interior of the disc of convergence: to wit, it is uniformly convergent on compact sets.

Historically, mathematicians such as Leonhard Euler operated liberally with infinite series, even if they were not convergent. When calculus was put on a sound and correct foundation in the nineteenth century, rigorous proofs of the convergence of series were always required.

While many uses of power series refer to their sums, it is also possible to treat power series as formal sums, meaning that no addition operations are actually performed, and the symbol "+" is an abstract symbol of conjunction which is not necessarily interpreted as corresponding to addition.
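Treating power series as coefficient sequences makes their arithmetic concrete. A sketch of the Cauchy product on truncated coefficient lists (the helper name is ours):

```python
# The Cauchy product of two power series has n-th coefficient
# c_n = sum_{k=0}^{n} a_k * b_{n-k}.  Truncated coefficient lists stand
# in for formal power series here.

def cauchy_product(a, b):
    """Coefficients of the product, truncated to the shorter length."""
    n = min(len(a), len(b))
    return [sum(a[k] * b[i - k] for k in range(i + 1)) for i in range(n)]

geom = [1, 1, 1, 1, 1]              # 1/(1 - x) = 1 + x + x**2 + ...
print(cauchy_product(geom, geom))   # [1, 2, 3, 4, 5], i.e. 1/(1 - x)**2
```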
In this setting, the sequence of coefficients itself is of interest, rather than the convergence of the series. Formal power series are used in combinatorics to describe and study sequences that are otherwise difficult to handle, for example, using the method of generating functions. The Hilbert–Poincaré series is a formal power series used to study graded algebras.", "title": "Series of functions" }, { "paragraph_id": 67, "text": "Even if the limit of the power series is not considered, if the terms support appropriate structure then it is possible to define operations such as addition, multiplication, derivative, antiderivative for power series \"formally\", treating the symbol \"+\" as if it corresponded to addition. In the most common setting, the terms come from a commutative ring, so that the formal power series can be added term-by-term and multiplied via the Cauchy product. In this case the algebra of formal power series is the total algebra of the monoid of natural numbers over the underlying term ring. If the underlying term ring is a differential algebra, then the algebra of formal power series is also a differential algebra, with differentiation performed term-by-term.", "title": "Series of functions" }, { "paragraph_id": 68, "text": "Laurent series generalize power series by admitting terms into the series with negative as well as positive exponents. A Laurent series is thus any series of the form", "title": "Series of functions" }, { "paragraph_id": 69, "text": "If such a series converges, then in general it does so in an annulus rather than a disc, and possibly some boundary points. The series converges uniformly on compact subsets of the interior of the annulus of convergence.", "title": "Series of functions" }, { "paragraph_id": 70, "text": "A Dirichlet series is one of the form", "title": "Series of functions" }, { "paragraph_id": 71, "text": "where s is a complex number. 
For example, if all an are equal to 1, then the Dirichlet series is the Riemann zeta function", "title": "Series of functions" }, { "paragraph_id": 72, "text": "Like the zeta function, Dirichlet series in general play an important role in analytic number theory. Generally a Dirichlet series converges if the real part of s is greater than a number called the abscissa of convergence. In many cases, a Dirichlet series can be extended to an analytic function outside the domain of convergence by analytic continuation. For example, the Dirichlet series for the zeta function converges absolutely when Re(s) > 1, but the zeta function can be extended to a holomorphic function defined on C ∖ { 1 } {\\displaystyle \\mathbb {C} \\setminus \\{1\\}} with a simple pole at 1.", "title": "Series of functions" }, { "paragraph_id": 73, "text": "This series can be directly generalized to general Dirichlet series.", "title": "Series of functions" }, { "paragraph_id": 74, "text": "A series of functions in which the terms are trigonometric functions is called a trigonometric series:", "title": "Series of functions" }, { "paragraph_id": 75, "text": "The most important example of a trigonometric series is the Fourier series of a function.", "title": "Series of functions" }, { "paragraph_id": 76, "text": "Greek mathematician Archimedes produced the first known summation of an infinite series with a method that is still used in the area of calculus today. He used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series, and gave a remarkably accurate approximation of π.", "title": "History of the theory of infinite series" }, { "paragraph_id": 77, "text": "Mathematicians from the Kerala school were studying infinite series c. 
1350 CE.", "title": "History of the theory of infinite series" }, { "paragraph_id": 78, "text": "In the 17th century, James Gregory worked in the new decimal system on infinite series and published several Maclaurin series. In 1715, a general method for constructing the Taylor series for all functions for which they exist was provided by Brook Taylor. Leonhard Euler in the 18th century, developed the theory of hypergeometric series and q-series.", "title": "History of the theory of infinite series" }, { "paragraph_id": 79, "text": "The investigation of the validity of infinite series is considered to begin with Gauss in the 19th century. Euler had already considered the hypergeometric series", "title": "History of the theory of infinite series" }, { "paragraph_id": 80, "text": "on which Gauss published a memoir in 1812. It established simpler criteria of convergence, and the questions of remainders and the range of convergence.", "title": "History of the theory of infinite series" }, { "paragraph_id": 81, "text": "Cauchy (1821) insisted on strict tests of convergence; he showed that if two series are convergent their product is not necessarily so, and with him begins the discovery of effective criteria. The terms convergence and divergence had been introduced long before by Gregory (1668). Leonhard Euler and Gauss had given various criteria, and Colin Maclaurin had anticipated some of Cauchy's discoveries. Cauchy advanced the theory of power series by his expansion of a complex function in such a form.", "title": "History of the theory of infinite series" }, { "paragraph_id": 82, "text": "Abel (1826) in his memoir on the binomial series", "title": "History of the theory of infinite series" }, { "paragraph_id": 83, "text": "corrected certain of Cauchy's conclusions, and gave a completely scientific summation of the series for complex values of m {\\displaystyle m} and x {\\displaystyle x} . 
He showed the necessity of considering the subject of continuity in questions of convergence.", "title": "History of the theory of infinite series" }, { "paragraph_id": 84, "text": "Cauchy's methods led to special rather than general criteria, and the same may be said of Raabe (1832), who made the first elaborate investigation of the subject, of De Morgan (from 1842), whose logarithmic test DuBois-Reymond (1873) and Pringsheim (1889) have shown to fail within a certain region; of Bertrand (1842), Bonnet (1843), Malmsten (1846, 1847, the latter without integration); Stokes (1847), Paucker (1852), Chebyshev (1852), and Arndt (1853).", "title": "History of the theory of infinite series" }, { "paragraph_id": 85, "text": "General criteria began with Kummer (1835), and have been studied by Eisenstein (1847), Weierstrass in his various contributions to the theory of functions, Dini (1867), DuBois-Reymond (1873), and many others. Pringsheim's memoirs (1889) present the most complete general theory.", "title": "History of the theory of infinite series" }, { "paragraph_id": 86, "text": "The theory of uniform convergence was treated by Cauchy (1821), his limitations being pointed out by Abel, but the first to attack it successfully were Seidel and Stokes (1847–48). Cauchy took up the problem again (1853), acknowledging Abel's criticism, and reaching the same conclusions which Stokes had already found. 
Thomae used the doctrine (1866), but there was great delay in recognizing the importance of distinguishing between uniform and non-uniform convergence, in spite of the demands of the theory of functions.", "title": "History of the theory of infinite series" }, { "paragraph_id": 87, "text": "A series is said to be semi-convergent (or conditionally convergent) if it is convergent but not absolutely convergent.", "title": "History of the theory of infinite series" }, { "paragraph_id": 88, "text": "Semi-convergent series were studied by Poisson (1823), who also gave a general form for the remainder of the Maclaurin formula. The most important solution of the problem is due, however, to Jacobi (1834), who attacked the question of the remainder from a different standpoint and reached a different formula. This expression was also worked out, and another one given, by Malmsten (1847). Schlömilch (Zeitschrift, Vol.I, p. 192, 1856) also improved Jacobi's remainder, and showed the relation between the remainder and Bernoulli's function", "title": "History of the theory of infinite series" }, { "paragraph_id": 89, "text": "Genocchi (1852) has further contributed to the theory.", "title": "History of the theory of infinite series" }, { "paragraph_id": 90, "text": "Among the early writers was Wronski, whose \"loi suprême\" (1815) was hardly recognized until Cayley (1873) brought it into prominence.", "title": "History of the theory of infinite series" }, { "paragraph_id": 91, "text": "Fourier series were being investigated as the result of physical considerations at the same time that Gauss, Abel, and Cauchy were working out the theory of infinite series. Series for the expansion of sines and cosines, of multiple arcs in powers of the sine and cosine of the arc had been treated by Jacob Bernoulli (1702) and his brother Johann Bernoulli (1701) and still earlier by Vieta. 
Euler and Lagrange simplified the subject, as did Poinsot, Schröter, Glaisher, and Kummer.", "title": "History of the theory of infinite series" }, { "paragraph_id": 92, "text": "Fourier (1807) set for himself a different problem, to expand a given function of x in terms of the sines or cosines of multiples of x, a problem which he embodied in his Théorie analytique de la chaleur (1822). Euler had already given the formulas for determining the coefficients in the series; Fourier was the first to assert and attempt to prove the general theorem. Poisson (1820–23) also attacked the problem from a different standpoint. Fourier did not, however, settle the question of convergence of his series, a matter left for Cauchy (1826) to attempt and for Dirichlet (1829) to handle in a thoroughly scientific manner (see convergence of Fourier series). Dirichlet's treatment (Crelle, 1829), of trigonometric series was the subject of criticism and improvement by Riemann (1854), Heine, Lipschitz, Schläfli, and du Bois-Reymond. Among other prominent contributors to the theory of trigonometric and Fourier series were Dini, Hermite, Halphen, Krause, Byerly and Appell.", "title": "History of the theory of infinite series" }, { "paragraph_id": 93, "text": "Asymptotic series, otherwise asymptotic expansions, are infinite series whose partial sums become good approximations in the limit of some point of the domain. In general they do not converge, but they are useful as sequences of approximations, each of which provides a value close to the desired answer for a finite number of terms. The difference is that an asymptotic series cannot be made to produce an answer as exact as desired, the way that convergent series can. 
In fact, after a certain number of terms, a typical asymptotic series reaches its best approximation; if more terms are included, most such series will produce worse answers.", "title": "Generalizations" }, { "paragraph_id": 94, "text": "Under many circumstances, it is desirable to assign a limit to a series which fails to converge in the usual sense. A summability method is such an assignment of a limit to a subset of the set of divergent series which properly extends the classical notion of convergence. Summability methods include Cesàro summation, (C,k) summation, Abel summation, and Borel summation, in increasing order of generality (and hence applicable to increasingly divergent series).", "title": "Generalizations" }, { "paragraph_id": 95, "text": "A variety of general results concerning possible summability methods are known. The Silverman–Toeplitz theorem characterizes matrix summability methods, which are methods for summing a divergent series by applying an infinite matrix to the vector of coefficients. The most general method for summing a divergent series is non-constructive, and concerns Banach limits.", "title": "Generalizations" }, { "paragraph_id": 96, "text": "Definitions may be given for sums over an arbitrary index set I . {\\displaystyle I.} There are two main differences with the usual notion of series: first, there is no specific order given on the set I {\\displaystyle I} ; second, this set I {\\displaystyle I} may be uncountable. 
The notion of convergence needs to be strengthened, because the concept of conditional convergence depends on the ordering of the index set.", "title": "Generalizations" }, { "paragraph_id": 97, "text": "If a : I ↦ G {\\displaystyle a:I\\mapsto G} is a function from an index set I {\\displaystyle I} to a set G , {\\displaystyle G,} then the \"series\" associated to a {\\displaystyle a} is the formal sum of the elements a ( x ) ∈ G {\\displaystyle a(x)\\in G} over the index elements x ∈ I {\\displaystyle x\\in I} denoted by the", "title": "Generalizations" }, { "paragraph_id": 98, "text": "When the index set is the natural numbers I = N , {\\displaystyle I=\\mathbb {N} ,} the function a : N ↦ G {\\displaystyle a:\\mathbb {N} \\mapsto G} is a sequence denoted by a ( n ) = a n . {\\displaystyle a(n)=a_{n}.} A series indexed on the natural numbers is an ordered formal sum and so we rewrite ∑ n ∈ N {\\textstyle \\sum _{n\\in \\mathbb {N} }} as ∑ n = 0 ∞ {\\textstyle \\sum _{n=0}^{\\infty }} in order to emphasize the ordering induced by the natural numbers. Thus, we obtain the common notation for a series indexed by the natural numbers", "title": "Generalizations" }, { "paragraph_id": 99, "text": "When summing a family { a i : i ∈ I } {\\displaystyle \\left\\{a_{i}:i\\in I\\right\\}} of non-negative real numbers, define", "title": "Generalizations" }, { "paragraph_id": 100, "text": "When the supremum is finite then the set of i ∈ I {\\displaystyle i\\in I} such that a i > 0 {\\displaystyle a_{i}>0} is countable. 
Indeed, for every n ≥ 1 , {\\displaystyle n\\geq 1,} the cardinality | A n | {\\displaystyle \\left|A_{n}\\right|} of the set A n = { i ∈ I : a i > 1 / n } {\\displaystyle A_{n}=\\left\\{i\\in I:a_{i}>1/n\\right\\}} is finite because", "title": "Generalizations" }, { "paragraph_id": 101, "text": "If I {\\displaystyle I} is countably infinite and enumerated as I = { i 0 , i 1 , … } {\\displaystyle I=\\left\\{i_{0},i_{1},\\ldots \\right\\}} then the above defined sum satisfies", "title": "Generalizations" }, { "paragraph_id": 102, "text": "provided the value ∞ {\\displaystyle \\infty } is allowed for the sum of the series.", "title": "Generalizations" }, { "paragraph_id": 103, "text": "Any sum over non-negative reals can be understood as the integral of a non-negative function with respect to the counting measure, which accounts for the many similarities between the two constructions.", "title": "Generalizations" }, { "paragraph_id": 104, "text": "Let a : I → X {\\displaystyle a:I\\to X} be a map, also denoted by ( a i ) i ∈ I , {\\displaystyle \\left(a_{i}\\right)_{i\\in I},} from some non-empty set I {\\displaystyle I} into a Hausdorff abelian topological group X . {\\displaystyle X.} Let Finite ( I ) {\\displaystyle \\operatorname {Finite} (I)} be the collection of all finite subsets of I , {\\displaystyle I,} with Finite ( I ) {\\displaystyle \\operatorname {Finite} (I)} viewed as a directed set, ordered under inclusion ⊆ {\\displaystyle \\,\\subseteq \\,} with union as join. 
The family ( a i ) i ∈ I , {\\displaystyle \\left(a_{i}\\right)_{i\\in I},} is said to be unconditionally summable if the following limit, which is denoted by ∑ i ∈ I a i {\\displaystyle \\sum _{i\\in I}a_{i}} and is called the sum of ( a i ) i ∈ I , {\\displaystyle \\left(a_{i}\\right)_{i\\in I},} exists in X : {\\displaystyle X:}", "title": "Generalizations" }, { "paragraph_id": 105, "text": "Saying that the sum S := ∑ i ∈ I a i {\\displaystyle S:=\\sum _{i\\in I}a_{i}} is the limit of finite partial sums means that for every neighborhood V {\\displaystyle V} of the origin in X , {\\displaystyle X,} there exists a finite subset A 0 {\\displaystyle A_{0}} of I {\\displaystyle I} such that", "title": "Generalizations" }, { "paragraph_id": 106, "text": "Because Finite ( I ) {\\displaystyle \\operatorname {Finite} (I)} is not totally ordered, this is not a limit of a sequence of partial sums, but rather of a net.", "title": "Generalizations" }, { "paragraph_id": 107, "text": "For every neighborhood W {\\displaystyle W} of the origin in X , {\\displaystyle X,} there is a smaller neighborhood V {\\displaystyle V} such that V − V ⊆ W . 
{\\displaystyle V-V\\subseteq W.} It follows that the finite partial sums of an unconditionally summable family ( a i ) i ∈ I , {\\displaystyle \\left(a_{i}\\right)_{i\\in I},} form a Cauchy net, that is, for every neighborhood W {\\displaystyle W} of the origin in X , {\\displaystyle X,} there exists a finite subset A 0 {\\displaystyle A_{0}} of I {\\displaystyle I} such that", "title": "Generalizations" }, { "paragraph_id": 108, "text": "which implies that a i ∈ W {\\displaystyle a_{i}\\in W} for every i ∈ I ∖ A 0 {\\displaystyle i\\in I\\setminus A_{0}} (by taking A 1 := A 0 ∪ { i } {\\displaystyle A_{1}:=A_{0}\\cup \\{i\\}} and A 2 := A 0 {\\displaystyle A_{2}:=A_{0}} ).", "title": "Generalizations" }, { "paragraph_id": 109, "text": "When X {\\displaystyle X} is complete, a family ( a i ) i ∈ I {\\displaystyle \\left(a_{i}\\right)_{i\\in I}} is unconditionally summable in X {\\displaystyle X} if and only if the finite sums satisfy the latter Cauchy net condition. When X {\\displaystyle X} is complete and ( a i ) i ∈ I , {\\displaystyle \\left(a_{i}\\right)_{i\\in I},} is unconditionally summable in X , {\\displaystyle X,} then for every subset J ⊆ I , {\\displaystyle J\\subseteq I,} the corresponding subfamily ( a j ) j ∈ J , {\\displaystyle \\left(a_{j}\\right)_{j\\in J},} is also unconditionally summable in X . {\\displaystyle X.}", "title": "Generalizations" }, { "paragraph_id": 110, "text": "When the sum of a family of non-negative numbers, in the extended sense defined before, is finite, then it coincides with the sum in the topological group X = R . 
{\\displaystyle X=\\mathbb {R} .}", "title": "Generalizations" }, { "paragraph_id": 111, "text": "If a family ( a i ) i ∈ I {\\displaystyle \\left(a_{i}\\right)_{i\\in I}} in X {\\displaystyle X} is unconditionally summable then for every neighborhood W {\\displaystyle W} of the origin in X , {\\displaystyle X,} there is a finite subset A 0 ⊆ I {\\displaystyle A_{0}\\subseteq I} such that a i ∈ W {\\displaystyle a_{i}\\in W} for every index i {\\displaystyle i} not in A 0 . {\\displaystyle A_{0}.} If X {\\displaystyle X} is a first-countable space then it follows that the set of i ∈ I {\\displaystyle i\\in I} such that a i ≠ 0 {\\displaystyle a_{i}\\neq 0} is countable. This need not be true in a general abelian topological group (see examples below).", "title": "Generalizations" }, { "paragraph_id": 112, "text": "Suppose that I = N . {\\displaystyle I=\\mathbb {N} .} If a family a n , n ∈ N , {\\displaystyle a_{n},n\\in \\mathbb {N} ,} is unconditionally summable in a Hausdorff abelian topological group X , {\\displaystyle X,} then the series in the usual sense converges and has the same sum,", "title": "Generalizations" }, { "paragraph_id": 113, "text": "By nature, the definition of unconditional summability is insensitive to the order of the summation. When ∑ a n {\\displaystyle \\sum a_{n}} is unconditionally summable, then the series remains convergent after any permutation σ : N → N {\\displaystyle \\sigma :\\mathbb {N} \\to \\mathbb {N} } of the set N {\\displaystyle \\mathbb {N} } of indices, with the same sum,", "title": "Generalizations" }, { "paragraph_id": 114, "text": "Conversely, if every permutation of a series ∑ a n {\\displaystyle \\sum a_{n}} converges, then the series is unconditionally convergent. 
When X {\\displaystyle X} is complete then unconditional convergence is also equivalent to the fact that all subseries are convergent; if X {\\displaystyle X} is a Banach space, this is equivalent to say that for every sequence of signs ε n = ± 1 {\\displaystyle \\varepsilon _{n}=\\pm 1} , the series", "title": "Generalizations" }, { "paragraph_id": 115, "text": "converges in X . {\\displaystyle X.}", "title": "Generalizations" }, { "paragraph_id": 116, "text": "If X {\\displaystyle X} is a topological vector space (TVS) and ( x i ) i ∈ I {\\displaystyle \\left(x_{i}\\right)_{i\\in I}} is a (possibly uncountable) family in X {\\displaystyle X} then this family is summable if the limit lim A ∈ Finite ( I ) x A {\\displaystyle \\lim _{A\\in \\operatorname {Finite} (I)}x_{A}} of the net ( x A ) A ∈ Finite ( I ) {\\displaystyle \\left(x_{A}\\right)_{A\\in \\operatorname {Finite} (I)}} exists in X , {\\displaystyle X,} where Finite ( I ) {\\displaystyle \\operatorname {Finite} (I)} is the directed set of all finite subsets of I {\\displaystyle I} directed by inclusion ⊆ {\\displaystyle \\,\\subseteq \\,} and x A := ∑ i ∈ A x i . {\\textstyle x_{A}:=\\sum _{i\\in A}x_{i}.}", "title": "Generalizations" }, { "paragraph_id": 117, "text": "It is called absolutely summable if in addition, for every continuous seminorm p {\\displaystyle p} on X , {\\displaystyle X,} the family ( p ( x i ) ) i ∈ I {\\displaystyle \\left(p\\left(x_{i}\\right)\\right)_{i\\in I}} is summable. If X {\\displaystyle X} is a normable space and if ( x i ) i ∈ I {\\displaystyle \\left(x_{i}\\right)_{i\\in I}} is an absolutely summable family in X , {\\displaystyle X,} then necessarily all but a countable collection of x i {\\displaystyle x_{i}} ’s are zero. 
Hence, in normed spaces, it is usually only ever necessary to consider series with countably many terms.", "title": "Generalizations" }, { "paragraph_id": 118, "text": "Summable families play an important role in the theory of nuclear spaces.", "title": "Generalizations" }, { "paragraph_id": 119, "text": "The notion of series can be easily extended to the case of a seminormed space. If x n {\\displaystyle x_{n}} is a sequence of elements of a normed space X {\\displaystyle X} and if x ∈ X {\\displaystyle x\\in X} then the series ∑ x n {\\displaystyle \\sum x_{n}} converges to x {\\displaystyle x} in X {\\displaystyle X} if the sequence of partial sums of the series ( ∑ n = 0 N x n ) N = 1 ∞ {\\textstyle \\left(\\sum _{n=0}^{N}x_{n}\\right)_{N=1}^{\\infty }} converges to x {\\displaystyle x} in X {\\displaystyle X} ; to wit,", "title": "Generalizations" }, { "paragraph_id": 120, "text": "More generally, convergence of series can be defined in any abelian Hausdorff topological group. Specifically, in this case, ∑ x n {\\displaystyle \\sum x_{n}} converges to x {\\displaystyle x} if the sequence of partial sums converges to x . 
{\\displaystyle x.}", "title": "Generalizations" }, { "paragraph_id": 121, "text": "If ( X , | ⋅ | ) {\\displaystyle (X,|\\cdot |)} is a seminormed space, then the notion of absolute convergence becomes: A series ∑ i ∈ I x i {\\textstyle \\sum _{i\\in I}x_{i}} of vectors in X {\\displaystyle X} converges absolutely if", "title": "Generalizations" }, { "paragraph_id": 122, "text": "in which case all but at most countably many of the values | x i | {\\displaystyle \\left|x_{i}\\right|} are necessarily zero.", "title": "Generalizations" }, { "paragraph_id": 123, "text": "If a countable series of vectors in a Banach space converges absolutely then it converges unconditionally, but the converse only holds in finite-dimensional Banach spaces (theorem of Dvoretzky & Rogers (1950)).", "title": "Generalizations" }, { "paragraph_id": 124, "text": "Conditionally convergent series can be considered if I {\\displaystyle I} is a well-ordered set, for example, an ordinal number α 0 . {\\displaystyle \\alpha _{0}.} In this case, define by transfinite recursion:", "title": "Generalizations" }, { "paragraph_id": 125, "text": "and for a limit ordinal α , {\\displaystyle \\alpha ,}", "title": "Generalizations" }, { "paragraph_id": 126, "text": "if this limit exists. If all limits exist up to α 0 , {\\displaystyle \\alpha _{0},} then the series converges.", "title": "Generalizations" }, { "paragraph_id": 127, "text": "a function whose support is a singleton { a } . {\\displaystyle \\{a\\}.} Then", "title": "Generalizations" }, { "paragraph_id": 128, "text": "in the topology of pointwise convergence (that is, the sum is taken in the infinite product group Y X {\\displaystyle Y^{X}} ).", "title": "Generalizations" }, { "paragraph_id": 129, "text": "While, formally, this requires a notion of sums of uncountable series, by construction there are, for every given x , {\\displaystyle x,} only finitely many nonzero terms in the sum, so issues regarding convergence of such sums do not arise. 
Actually, one usually assumes more: the family of functions is locally finite, that is, for every x {\\displaystyle x} there is a neighborhood of x {\\displaystyle x} in which all but a finite number of functions vanish. Any regularity property of the φ i , {\\displaystyle \\varphi _{i},} such as continuity, differentiability, that is preserved under finite sums will be preserved for the sum of any subcollection of this family of functions.", "title": "Generalizations" }, { "paragraph_id": 130, "text": "(in other words, ω 1 {\\displaystyle \\omega _{1}} copies of 1 is ω 1 {\\displaystyle \\omega _{1}} ) only if one takes a limit over all countable partial sums, rather than finite partial sums. This space is not separable.", "title": "Generalizations" }, { "paragraph_id": 131, "text": "MR0033975", "title": "Bibliography" } ]
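Among the summability methods discussed in the Generalizations section, Cesàro summation is the simplest to demonstrate numerically: it assigns to a series the limit of the averages of its partial sums. A minimal sketch (plain Python; the function names are chosen for illustration), using Grandi's series 1 − 1 + 1 − 1 + ⋯, which diverges in the ordinary sense but has Cesàro sum 1/2:

```python
def partial_sums(terms):
    """Running partial sums s_n = a_0 + ... + a_n."""
    s, out = 0.0, []
    for a in terms:
        s += a
        out.append(s)
    return out

def cesaro_means(terms):
    """Averages (s_0 + ... + s_N) / (N + 1) of the partial sums."""
    sums = partial_sums(terms)
    total, out = 0.0, []
    for n, s in enumerate(sums):
        total += s
        out.append(total / (n + 1))
    return out

grandi = [(-1) ** n for n in range(10000)]  # 1, -1, 1, -1, ...
# The partial sums oscillate between 1 and 0 forever, but their
# averages settle toward 1/2, the Cesàro sum of the series.
print(cesaro_means(grandi)[-1])  # → 0.5
```

For a series that already converges in the ordinary sense, the Cesàro means converge to the same value, which is why Cesàro summation "properly extends the classical notion of convergence" as stated above.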
In mathematics, a series is, roughly speaking, the operation of adding infinitely many quantities, one after the other, to a given starting quantity. The study of series is a major part of calculus and its generalization, mathematical analysis. Series are used in most areas of mathematics, even for studying finite structures through generating functions. In addition to their ubiquity in mathematics, infinite series are also widely used in other quantitative disciplines such as physics, computer science, statistics and finance. For a long time, the idea that such a potentially infinite summation could produce a finite result was considered paradoxical. This paradox was resolved using the concept of a limit during the 17th century. Zeno's paradox of Achilles and the tortoise illustrates this counterintuitive property of infinite sums: Achilles runs after a tortoise, but when he reaches the position of the tortoise at the beginning of the race, the tortoise has reached a second position; when he reaches this second position, the tortoise is at a third position, and so on. Zeno concluded that Achilles could never reach the tortoise, and thus that movement does not exist. Zeno divided the race into infinitely many sub-races, each requiring a finite amount of time, so that the total time for Achilles to catch the tortoise is given by a series. The resolution of the paradox is that, although the series has an infinite number of terms, it has a finite sum, which gives the time necessary for Achilles to catch up with the tortoise. In modern terminology, any (ordered) infinite sequence (a_1, a_2, a_3, …) of terms defines a series, which is the operation of adding the a_i one after the other. To emphasize that there are an infinite number of terms, a series may be called an infinite series. Such a series is represented by an expression like a_1 + a_2 + a_3 + ⋯ or, using the summation sign, ∑_{i=1}^{∞} a_i. The infinite sequence of additions implied by a series cannot be effectively carried on.
However, if the set to which the terms and their finite sums belong has a notion of limit, it is sometimes possible to assign a value to a series, called the sum of the series. This value is the limit as n tends to infinity of the finite sums of the first n terms of the series, which are called the nth partial sums of the series. That is, ∑_{i=1}^{∞} a_i = lim_{n→∞} ∑_{i=1}^{n} a_i. When this limit exists, one says that the series is convergent or summable, or that the sequence is summable. In this case, the limit is called the sum of the series. Otherwise, the series is said to be divergent. The notation ∑_{i=1}^{∞} a_i denotes both the series—that is, the implicit process of adding the terms one after the other indefinitely—and, if the series is convergent, the sum of the series—the result of the process. This is a generalization of the similar convention of denoting by a + b both the addition—the process of adding—and its result—the sum of a and b. Generally, the terms of a series come from a ring, often the field R of the real numbers or the field C of the complex numbers. In this case, the set of all series is itself a ring, in which the addition consists of adding the series term by term, and the multiplication is the Cauchy product.
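The two operations named in the abstract—partial sums converging to a finite value, and multiplication of series by the Cauchy product c_n = ∑_{k=0}^{n} a_k b_{n−k}—can be illustrated with a short sketch (the function names here are illustrative, not from any library):

```python
def nth_partial_sum(a, n):
    """Sum of the first n terms of the sequence given by a(i)."""
    return sum(a(i) for i in range(n))

# Zeno's race: the sub-race times 1/2 + 1/4 + 1/8 + ... form a
# geometric series whose partial sums approach the finite value 1.
half = lambda i: 0.5 ** (i + 1)
print(nth_partial_sum(half, 50))  # differs from 1 by less than 1e-15

def cauchy_product(a, b, n_terms):
    """First n_terms coefficients c_n = sum_{k=0}^{n} a_k * b_{n-k}."""
    return [sum(a[k] * b[n - k] for k in range(n + 1))
            for n in range(n_terms)]

# Squaring the series with all coefficients 1 (the geometric series
# in x) yields the coefficients 1, 2, 3, ... of 1/(1-x)^2.
ones = [1] * 6
print(cauchy_product(ones, ones, 6))  # → [1, 2, 3, 4, 5, 6]
```

The Cauchy product computed term by term like this is exactly the ring multiplication on formal power series mentioned in the Series of functions section.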
2001-04-17T07:04:08Z
2023-12-16T21:09:09Z
[ "Template:OEIS", "Template:Reflist", "Template:Schaefer Wolff Topological Vector Spaces", "Template:Nowrap", "Template:Cite book", "Template:Cite web", "Template:Refend", "Template:Authority control", "Template:Circa", "Template:MathSciNet", "Template:About", "Template:Math", "Template:Sfn", "Template:Citation", "Template:Refbegin", "Template:Calculus", "Template:Mvar", "Template:Main", "Template:For", "Template:Anchor", "Template:Cite journal", "Template:Narici Beckenstein Topological Vector Spaces", "Template:Analysis-footer", "Template:Short description", "Template:Em", "Template:Harvtxt", "Template:Harvnb", "Template:Trèves François Topological vector spaces, distributions and kernels", "Template:Commons category", "Template:Springer", "Template:Series (mathematics)" ]
https://en.wikipedia.org/wiki/Series_(mathematics)
15,289
Interrupt
In digital computers, an interrupt (sometimes referred to as a trap) is a request for the processor to interrupt currently executing code (when permitted), so that the event can be processed in a timely manner. If the request is accepted, the processor will suspend its current activities, save its state, and execute a function called an interrupt handler (or an interrupt service routine, ISR) to deal with the event. This interruption is often temporary, allowing the software to resume normal activities after the interrupt handler finishes, although the interrupt could instead indicate a fatal error. Interrupts are commonly used by hardware devices to indicate electronic or physical state changes that require time-sensitive attention. Interrupts are also commonly used to implement computer multitasking and system calls, especially in real-time computing. Systems that use interrupts in these ways are said to be interrupt-driven. Hardware interrupts were introduced as an optimization, eliminating unproductive waiting time in polling loops that wait for external events. The first system to use this approach was the DYSEAC, completed in 1954, although earlier systems provided error trap functions. The UNIVAC 1103A computer is generally credited with the earliest use of interrupts in 1953. Earlier, on the UNIVAC I (1951) "Arithmetic overflow either triggered the execution of a two-instruction fix-up routine at address 0, or, at the programmer's option, caused the computer to stop." The IBM 650 (1954) incorporated the first occurrence of interrupt masking. The National Bureau of Standards DYSEAC (1954) was the first to use interrupts for I/O. The IBM 704 was the first to use interrupts for debugging, with a "transfer trap", which could invoke a special routine when a branch instruction was encountered. The MIT Lincoln Laboratory TX-2 system (1957) was the first to provide multiple levels of priority interrupts.
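True hardware interrupts are not reachable from portable user-space code, but POSIX signals give a rough software analogue of the suspend, handle, resume cycle described above. The sketch below uses Python's standard `signal` module; it is a loose analogy, not a model of processor behavior, and `SIGUSR1` is Unix-only:

```python
import os
import signal

events = []

def handler(signum, frame):
    # Plays the role of an interrupt service routine: record the
    # event and return, letting normal execution resume.
    events.append(signum)

# Register the "ISR" for SIGUSR1, then raise the "interrupt" ourselves.
signal.signal(signal.SIGUSR1, handler)
os.kill(os.getpid(), signal.SIGUSR1)

# By this point the handler has already run and control has returned
# to the main flow, mirroring the temporary nature of an interruption.
print(events)
```

The kernel's signal delivery here stands in for the processor accepting a request, saving state, and invoking the registered handler.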
Interrupt signals may be issued in response to hardware or software events. These are classified as hardware interrupts or software interrupts, respectively. For any particular processor, the number of interrupt types is limited by the architecture. A hardware interrupt is a condition related to the state of the hardware that may be signaled by an external hardware device, e.g., an interrupt request (IRQ) line on a PC, or detected by devices embedded in processor logic (e.g., the CPU timer in IBM System/370), to communicate that the device needs attention from the operating system (OS) or, if there is no OS, from the bare metal program running on the CPU. Such external devices may be part of the computer (e.g., disk controller) or they may be external peripherals. For example, pressing a keyboard key or moving a mouse plugged into a PS/2 port triggers hardware interrupts that cause the processor to read the keystroke or mouse position. Hardware interrupts can arrive asynchronously with respect to the processor clock, and at any time during instruction execution. Consequently, all incoming hardware interrupt signals are conditioned by synchronizing them to the processor clock, and acted upon only at instruction execution boundaries. In many systems, each device is associated with a particular IRQ signal. This makes it possible to quickly determine which hardware device is requesting service, and to expedite servicing of that device. On some older systems, such as the 1964 CDC 3600, all interrupts went to the same location, and the OS used a specialized instruction to determine the highest-priority outstanding unmasked interrupt. On contemporary systems, there is generally a distinct interrupt routine for each type of interrupt (or for each interrupt source), often implemented as one or more interrupt vector tables. To mask an interrupt is to disable it, so it is deferred or ignored by the processor, while to unmask an interrupt is to enable it. 
Processors typically have an internal interrupt mask register, which allows selective enabling (and disabling) of hardware interrupts. Each interrupt signal is associated with a bit in the mask register. On some systems, the interrupt is enabled when the bit is set, and disabled when the bit is clear. On others, the reverse is true, and a set bit disables the interrupt. When the interrupt is disabled, the associated interrupt signal may be ignored by the processor, or it may remain pending. Signals which are affected by the mask are called maskable interrupts. Some interrupt signals are not affected by the interrupt mask and therefore cannot be disabled; these are called non-maskable interrupts (NMIs). These indicate high-priority events which cannot be ignored under any circumstances, such as the timeout signal from a watchdog timer. On SPARC, however, the non-maskable interrupt (NMI), despite having the highest priority among interrupts, can be prevented from occurring through the use of an interrupt mask. One failure mode is when the hardware does not generate the expected interrupt for a change in state, causing the operating system to wait indefinitely. Depending on the details, the failure might affect only a single process or might have global impact. Some operating systems have code specifically to deal with this. As an example, IBM Operating System/360 (OS/360) relies on a not-ready to ready device-end interrupt when a tape has been mounted on a tape drive, and will not read the tape label until that interrupt occurs or is simulated. IBM added code in OS/360 so that the VARY ONLINE command will simulate a device end interrupt on the target device. A spurious interrupt is a hardware interrupt for which no source can be found. The term "phantom interrupt" or "ghost interrupt" may also be used to describe this phenomenon. Spurious interrupts tend to be a problem with a wired-OR interrupt circuit attached to a level-sensitive processor input. 
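A toy model of a mask register can make the masking behavior concrete. This sketch assumes the convention where a set bit enables the interrupt, and lets masked interrupts remain pending rather than be dropped (both choices vary between real processors):

```python
# Minimal model of an interrupt mask register. Assumption: a set bit
# ENABLES the corresponding interrupt (some hardware inverts this).

class MaskRegister:
    def __init__(self):
        self.bits = 0          # all interrupts masked initially
        self.pending = 0       # masked interrupts remain pending here

    def unmask(self, irq):
        self.bits |= (1 << irq)

    def mask(self, irq):
        self.bits &= ~(1 << irq)

    def signal(self, irq):
        """Record an incoming interrupt request."""
        self.pending |= (1 << irq)

    def next_deliverable(self):
        """Return the lowest-numbered pending, unmasked IRQ, or None."""
        ready = self.pending & self.bits
        if ready == 0:
            return None
        irq = (ready & -ready).bit_length() - 1   # index of lowest set bit
        self.pending &= ~(1 << irq)
        return irq
```

An NMI would simply bypass this register: its delivery would not consult `bits` at all.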
Such interrupts may be difficult to identify when a system misbehaves. In a wired-OR circuit, parasitic capacitance charging/discharging through the interrupt line's bias resistor will cause a small delay before the processor recognizes that the interrupt source has been cleared. If the interrupting device is cleared too late in the interrupt service routine (ISR), there will not be enough time for the interrupt circuit to return to the quiescent state before the current instance of the ISR terminates. The result is that the processor will think another interrupt is pending, since the voltage at its interrupt request input will not be high enough or low enough to establish an unambiguous internal logic 1 or logic 0. The apparent interrupt will have no identifiable source, hence the "spurious" moniker. A spurious interrupt may also be the result of electrical anomalies due to faulty circuit design, high noise levels, crosstalk, timing issues, or more rarely, device errata. A spurious interrupt may result in system deadlock or other undefined operation if the ISR does not account for the possibility of such an interrupt occurring; in adverse scenarios, spurious interrupts may even crash the computer. As spurious interrupts are mostly a problem with wired-OR interrupt circuits, good programming practice in such systems is for the ISR to check all interrupt sources for activity and take no action (other than possibly logging the event) if none of the sources is interrupting. A software interrupt is requested by the processor itself upon executing particular instructions or when certain conditions are met. Every software interrupt signal is associated with a particular interrupt handler. A software interrupt may be intentionally caused by executing a special instruction which, by design, invokes an interrupt when executed. 
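The defensive ISR practice just described — check every possible source and only log when none is active — can be sketched as a simulation (here, each `is_requesting` callable stands in for reading a device's status register):

```python
# Sketch of the recommended ISR for a shared wired-OR line: poll every
# possible source, and take no action beyond logging when none is active.

def shared_line_isr(sources, log):
    """sources: mapping of device name -> callable returning True if that
    device is currently requesting service. Returns the devices serviced."""
    serviced = []
    for name, is_requesting in sources.items():
        if is_requesting():
            serviced.append(name)       # stand-in for real servicing
    if not serviced:
        log.append("spurious interrupt: no source found")
    return serviced
```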
Such instructions function similarly to subroutine calls and are used for a variety of purposes, such as requesting operating system services and interacting with device drivers (e.g., to read or write storage media). Software interrupts may also be triggered by program execution errors or by the virtual memory system. Typically, the operating system kernel will catch and handle such interrupts. Some interrupts are handled transparently to the program - for example, the normal resolution of a page fault is to make the required page accessible in physical memory. But in other cases such as a segmentation fault the operating system executes a process callback. On Unix-like operating systems this involves sending a signal such as SIGSEGV, SIGBUS, SIGILL or SIGFPE, which may either call a signal handler or execute a default action (terminating the program). On Windows the callback is made using Structured Exception Handling with an exception code such as STATUS_ACCESS_VIOLATION or STATUS_INTEGER_DIVIDE_BY_ZERO. In a kernel process, it is often the case that some types of software interrupts are not supposed to happen. If they occur nonetheless, an operating system crash may result. The terms interrupt, trap, exception, fault, and abort are used to distinguish types of interrupts, although "there is no clear consensus as to the exact meaning of these terms". The term trap may refer to any interrupt, to any software interrupt, to any synchronous software interrupt, or only to interrupts caused by instructions with trap in their names. In some usages, the term trap refers specifically to a breakpoint intended to initiate a context switch to a monitor program or debugger. It may also refer to a synchronous interrupt caused by an exceptional condition (e.g., division by zero, invalid memory access, illegal opcode), although the term exception is more common for this. 
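The Unix-style signal delivery mentioned above — a kernel-mediated callback into the process — can be illustrated from user space. The sketch below (POSIX-only; SIGUSR1 is chosen arbitrarily rather than provoking a real fault) registers a handler and then delivers the signal to the current process:

```python
import signal

# Register a handler and deliver a signal to ourselves, mimicking the
# kernel's callback mechanism for conditions like SIGSEGV or SIGFPE.

caught = []

def handler(signum, frame):
    caught.append(signal.Signals(signum).name)

signal.signal(signal.SIGUSR1, handler)   # register, like sigaction()
signal.raise_signal(signal.SIGUSR1)      # deliver to the current process
```

Had no handler been registered, the default action for the signal would have applied instead, just as the article describes for SIGSEGV terminating a program.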
x86 divides interrupts into (hardware) interrupts and software exceptions, and identifies three types of exceptions: faults, traps, and aborts. (Hardware) interrupts are interrupts triggered asynchronously by an I/O device, and allow the program to be restarted with no loss of continuity. A fault is restartable as well but is tied to the synchronous execution of an instruction - the return address points to the faulting instruction. A trap is similar to a fault except that the return address points to the instruction to be executed after the trapping instruction; one prominent use is to implement system calls. An abort is used for severe errors, such as hardware errors and illegal values in system tables, and often does not allow a restart of the program. Arm uses the term exception to refer to all types of interrupts, and divides exceptions into (hardware) interrupts, aborts, reset, and exception-generating instructions. Aborts correspond to x86 exceptions and may be prefetch aborts (failed instruction fetches) or data aborts (failed data accesses), and may be synchronous or asynchronous. Asynchronous aborts may be precise or imprecise. MMU aborts (page faults) are synchronous. RISC-V uses interrupt as the overall term as well as for the external subset; the internal ones are called exceptions. Each interrupt signal input is designed to be triggered by either a logic signal level or a particular signal edge (level transition). Level-sensitive inputs continuously request processor service so long as a particular (high or low) logic level is applied to the input. Edge-sensitive inputs react to signal edges: a particular (rising or falling) edge will cause a service request to be latched; the processor resets the latch when the interrupt handler executes. A level-triggered interrupt is requested by holding the interrupt signal at its particular (high or low) active logic level. 
A device invokes a level-triggered interrupt by driving the signal to and holding it at the active level. It negates the signal when the processor commands it to do so, typically after the device has been serviced. The processor samples the interrupt input signal during each instruction cycle. The processor will recognize the interrupt request if the signal is asserted when sampling occurs. Level-triggered inputs allow multiple devices to share a common interrupt signal via wired-OR connections. The processor polls to determine which devices are requesting service. After servicing a device, the processor may again poll and, if necessary, service other devices before exiting the ISR. An edge-triggered interrupt is an interrupt signaled by a level transition on the interrupt line, either a falling edge (high to low) or a rising edge (low to high). A device wishing to signal an interrupt drives a pulse onto the line and then releases the line to its inactive state. If the pulse is too short to be detected by polled I/O then special hardware may be required to detect it. The important part of edge triggering is that the signal must transition to trigger the interrupt; for example, if the signal was high-low-low, there would only be one falling edge interrupt triggered, and the continued low level would not trigger a further interrupt. The signal must return to the high level and fall again in order to trigger a further interrupt. This contrasts with a level trigger where the low level would continue to create interrupts (if they are enabled) until the signal returns to its high level. Computers with edge-triggered interrupts may include an interrupt register that retains the status of pending interrupts. Systems with interrupt registers generally have interrupt mask registers as well. The processor samples the interrupt trigger signals or interrupt register during each instruction cycle, and will process the highest priority enabled interrupt found. 
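The difference between the two triggering disciplines can be illustrated on a sampled signal. In this sketch the active level is low, so an edge-triggered input fires once per falling edge, while a level-triggered input keeps requesting service for every sample held low (a simplified model of per-instruction-cycle sampling):

```python
# Compare edge- and level-triggered requests over a sampled signal.
# Active level is low (0); the line idles high (1).

def edge_triggered(samples, previous=1):
    """Count falling edges (1 -> 0): each triggers exactly one request."""
    count = 0
    for s in samples:
        if previous == 1 and s == 0:
            count += 1
        previous = s
    return count

def level_triggered(samples):
    """Each sample at the active (low) level keeps the request asserted."""
    return sum(1 for s in samples if s == 0)
```

The high-low-low case from the text behaves as described: `edge_triggered([1, 0, 0])` yields a single request, while the level-triggered count keeps growing for as long as the line stays low.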
Regardless of the triggering method, the processor will begin interrupt processing at the next instruction boundary following a detected trigger. There are several different architectures for handling interrupts. In some, there is a single interrupt handler that must scan for the highest priority enabled interrupt. In others, there are separate interrupt handlers for separate interrupt types, separate I/O channels or devices, or both. Several interrupt causes may have the same interrupt type and thus the same interrupt handler, requiring the interrupt handler to determine the cause. Interrupts may be fully handled in hardware by the CPU, or may be handled by both the CPU and another component such as a programmable interrupt controller or a southbridge. If an additional component is used, that component would be connected between the interrupting device and the processor's interrupt pin to multiplex several sources of interrupt onto the one or two CPU lines typically available. If implemented as part of the memory controller, interrupts are mapped into the system's memory address space. In systems on a chip (SoC) implementations, interrupts come from different blocks of the chip and are usually aggregated in an interrupt controller attached to one or several processors (in a multi-core system). Multiple devices may share an edge-triggered interrupt line if they are designed to. The interrupt line must have a pull-down or pull-up resistor so that when not actively driven it settles to its inactive state, which is its default state. Devices signal an interrupt by briefly driving the line to its non-default state, and let the line float (do not actively drive it) when not signaling an interrupt. This type of connection is also referred to as open collector. The line then carries all the pulses generated by all the devices. 
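The open-collector arrangement just described can be modeled in a few lines: with a pull-up resistor, the line reads high (inactive) unless at least one device pulls it low, which is also why simultaneous requests merge indistinguishably:

```python
# Model of an open-collector interrupt line with a pull-up resistor.

def line_state(device_outputs):
    """device_outputs: one entry per device, where 0 means 'actively
    driving low' and None means 'floating' (not driving). The pull-up
    only wins when every device floats."""
    if any(d == 0 for d in device_outputs):
        return 0    # active: at least one device is requesting service
    return 1        # pulled up to the default, inactive state
```

Note that `line_state([0, 0])` and `line_state([None, 0])` are identical from the processor's point of view, which is why the CPU must poll all devices after detecting an interrupt on such a line.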
(This is analogous to the pull cord on some buses and trolleys that any passenger can pull to signal the driver that they are requesting a stop.) However, interrupt pulses from different devices may merge if they occur close in time. To avoid losing interrupts the CPU must trigger on the trailing edge of the pulse (e.g. the rising edge if the line is pulled up and driven low). After detecting an interrupt the CPU must check all the devices for service requirements. Edge-triggered interrupts do not suffer the problems that level-triggered interrupts have with sharing. Service of a low-priority device can be postponed arbitrarily, while interrupts from high-priority devices continue to be received and get serviced. If there is a device that the CPU does not know how to service, which may raise spurious interrupts, it will not interfere with interrupt signaling of other devices. However, it is easy for an edge-triggered interrupt to be missed - for example, when interrupts are masked for a period - and unless there is some type of hardware latch that records the event it is impossible to recover. This problem caused many "lockups" in early computer hardware because the processor did not know it was expected to do something. More modern hardware often has one or more interrupt status registers that latch interrupt requests; well-written edge-driven interrupt handling code can check these registers to ensure no events are missed. The elderly Industry Standard Architecture (ISA) bus uses edge-triggered interrupts, without mandating that devices be able to share IRQ lines, but all mainstream ISA motherboards include pull-up resistors on their IRQ lines, so well-behaved ISA devices sharing IRQ lines should just work fine. The parallel port also uses edge-triggered interrupts. Many older devices assume that they have exclusive use of IRQ lines, making it electrically unsafe to share them. There are three ways in which multiple devices can share the same interrupt line. 
The first is by exclusive conduction (switching) or exclusive connection (to pins). The next is by bus (all devices connected to the same line and listening): cards on a bus must know when they are to talk and when not to talk (e.g., the ISA bus). Talking can be triggered in two ways: by accumulation latch or by logic gates. Logic gates expect a continual data flow that is monitored for key signals. Accumulators only trigger when the remote side excites the gate beyond a threshold, so no negotiated speed is required. Each has its speed versus distance advantages. A trigger, generally, is the method by which excitation is detected: rising edge, falling edge, or threshold (an oscilloscope can trigger on a wide variety of shapes and conditions). Triggering for software interrupts must be built into the software (both in the OS and in the application). A C application has a trigger table (a table of functions) in its header, which both the application and the OS know of and use appropriately; this is not related to hardware. However, this should not be confused with hardware interrupts, which signal the CPU (the CPU then executes software from a table of functions, similarly to software interrupts). Multiple devices sharing an interrupt line (of any triggering style) all act as spurious interrupt sources with respect to each other. With many devices on one line, the workload in servicing interrupts grows in proportion to the square of the number of devices. It is therefore preferred to spread devices evenly across the available interrupt lines. Shortage of interrupt lines is a problem in older system designs where the interrupt lines are distinct physical conductors. Message-signaled interrupts, where the interrupt line is virtual, are favored in new system architectures (such as PCI Express) and relieve this problem to a considerable extent. Some devices with a poorly designed programming interface provide no way to determine whether they have requested service. They may lock up or otherwise misbehave if serviced when they do not want it. 
Such devices cannot tolerate spurious interrupts, and so also cannot tolerate sharing an interrupt line. ISA cards, due to often cheap design and construction, are notorious for this problem. Such devices are becoming much rarer, as hardware logic becomes cheaper and new system architectures mandate shareable interrupts. Some systems use a hybrid of level-triggered and edge-triggered signaling. The hardware not only looks for an edge, but it also verifies that the interrupt signal stays active for a certain period of time. A common use of a hybrid interrupt is for the NMI (non-maskable interrupt) input. Because NMIs generally signal major – or even catastrophic – system events, a good implementation of this signal tries to ensure that the interrupt is valid by verifying that it remains active for a period of time. This two-step approach helps to eliminate false interrupts from affecting the system. A message-signaled interrupt does not use a physical interrupt line. Instead, a device signals its request for service by sending a short message over some communications medium, typically a computer bus. The message might be of a type reserved for interrupts, or it might be of some pre-existing type such as a memory write. Message-signaled interrupts behave very much like edge-triggered interrupts, in that the interrupt is a momentary signal rather than a continuous condition. Interrupt-handling software treats the two in much the same manner. Typically, multiple pending message-signaled interrupts with the same message (the same virtual interrupt line) are allowed to merge, just as closely spaced edge-triggered interrupts can merge. Message-signaled interrupt vectors can be shared, to the extent that the underlying communication medium can be shared. No additional effort is required. Because the identity of the interrupt is indicated by a pattern of data bits, not requiring a separate physical conductor, many more distinct interrupts can be efficiently handled. 
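The merging of identical pending messages can be sketched with a minimal model of an MSI controller (a hypothetical class, not a real bus interface): a vector posted twice before being serviced is delivered only once, just as closely spaced edge pulses merge:

```python
# Toy model of message-signaled interrupt delivery: each message carries
# a vector number, and pending identical vectors merge.

class MSIController:
    def __init__(self):
        self.pending = set()   # each vector is pending at most once

    def write_message(self, vector):
        """A device posts an interrupt as a bus write; duplicates merge."""
        self.pending.add(vector)

    def drain(self):
        """Deliver all pending vectors, lowest first, and clear them."""
        ready = sorted(self.pending)
        self.pending.clear()
        return ready
```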
This reduces the need for sharing. Interrupt messages can also be passed over a serial bus, not requiring any additional lines. PCI Express, a serial computer bus, uses message-signaled interrupts exclusively. In a push button analogy applied to computer systems, the term doorbell or doorbell interrupt is often used to describe a mechanism whereby a software system can signal or notify a computer hardware device that there is some work to be done. Typically, the software system will place data in some well-known and mutually agreed upon memory locations, and "ring the doorbell" by writing to a different memory location. This different memory location is often called the doorbell region, and there may even be multiple doorbells serving different purposes in this region. It is this act of writing to the doorbell region of memory that "rings the bell" and notifies the hardware device that the data are ready and waiting. The hardware device would now know that the data are valid and can be acted upon. It would typically write the data to a hard disk drive, or send them over a network, or encrypt them, etc. The term doorbell interrupt is usually a misnomer. It is similar to an interrupt, because it causes some work to be done by the device; however, the doorbell region is sometimes implemented as a polled region, sometimes the doorbell region writes through to physical device registers, and sometimes the doorbell region is hardwired directly to physical device registers. When either writing through or directly to physical device registers, this may cause a real interrupt to occur at the device's central processing unit (CPU), if it has one. Doorbell interrupts can be compared to message-signaled interrupts, as they have some similarities. In multiprocessor systems, a processor may send an interrupt request to another processor via inter-processor interrupts (IPI). 
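The doorbell sequence described above — fill the agreed-upon buffer, then write the doorbell location — can be sketched with a hypothetical device model (not a real driver interface):

```python
# Hypothetical doorbell scheme: software fills a mutually agreed buffer,
# then "rings the doorbell"; only then does the device act on the data.

class DoorbellDevice:
    def __init__(self):
        self.buffer = None       # the well-known, agreed-upon data region
        self.processed = []

    def ring(self):
        """Simulates the write to the doorbell region: the device now
        knows the buffer contents are valid and acts on them."""
        self.processed.append(self.buffer)
        self.buffer = None

dev = DoorbellDevice()
dev.buffer = b"payload"   # step 1: place data in the known location
dev.ring()                # step 2: ring the doorbell
```

The ordering matters: if the device looked at the buffer before the bell was rung, it might read stale or half-written data, which is why the doorbell write comes last.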
Interrupts provide low overhead and good latency at low load, but degrade significantly at high interrupt rate unless care is taken to prevent several pathologies. The phenomenon where the overall system performance is severely hindered by excessive amounts of processing time spent handling interrupts is called an interrupt storm. There are various forms of livelock, in which the system spends all of its time processing interrupts to the exclusion of other required tasks. Under extreme conditions, a large number of interrupts (like very high network traffic) may completely stall the system. To avoid such problems, an operating system must schedule network interrupt handling as carefully as it schedules process execution. With multi-core processors, additional performance improvements in interrupt handling can be achieved through receive-side scaling (RSS) when multiqueue NICs are used. Such NICs provide multiple receive queues associated with separate interrupts; by routing each of those interrupts to different cores, processing of the interrupt requests triggered by the network traffic received by a single NIC can be distributed among multiple cores. Distribution of the interrupts among cores can be performed automatically by the operating system, or the routing of interrupts (usually referred to as IRQ affinity) can be manually configured. A purely software-based implementation of the receiving traffic distribution, known as receive packet steering (RPS), distributes received traffic among cores later in the data path, as part of the interrupt handler functionality. Advantages of RPS over RSS include no requirements for specific hardware, more advanced traffic distribution filters, and reduced rate of interrupts produced by a NIC. As a downside, RPS increases the rate of inter-processor interrupts (IPIs). 
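The idea behind RSS — hash each flow so that its packets, and hence its interrupts, consistently land on one queue and one core — can be sketched as follows (CRC32 here is only a stand-in for whatever hash a real NIC implements, such as a Toeplitz hash):

```python
import zlib

# Simplified receive-side scaling: hash a flow's 4-tuple onto one of the
# NIC's receive queues, so that flow's interrupts stay on one core.

def rss_queue(src_ip, src_port, dst_ip, dst_port, num_queues):
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % num_queues
```

Because the mapping is deterministic, all packets of one flow hit the same queue (preserving ordering and cache locality on that core), while distinct flows spread across the available queues.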
Receive flow steering (RFS) takes the software-based approach further by accounting for application locality; further performance improvements are achieved by processing interrupt requests by the same cores on which particular network packets will be consumed by the targeted application. Interrupts are commonly used to service hardware timers, transfer data to and from storage (e.g., disk I/O) and communication interfaces (e.g., UART, Ethernet), handle keyboard and mouse events, and to respond to any other time-sensitive events as required by the application system. Non-maskable interrupts are typically used to respond to high-priority requests such as watchdog timer timeouts, power-down signals and traps. Hardware timers are often used to generate periodic interrupts. In some applications, such interrupts are counted by the interrupt handler to keep track of absolute or elapsed time, or used by the OS task scheduler to manage execution of running processes, or both. Periodic interrupts are also commonly used to invoke sampling from input devices such as analog-to-digital converters, incremental encoder interfaces, and GPIO inputs, and to program output devices such as digital-to-analog converters, motor controllers, and GPIO outputs. A disk interrupt signals the completion of a data transfer from or to the disk peripheral; this may cause a process to run which is waiting to read or write. A power-off interrupt predicts imminent loss of power, allowing the computer to perform an orderly shut-down while there still remains enough power to do so. Keyboard interrupts typically cause keystrokes to be buffered so as to implement typeahead. Interrupts are sometimes used to emulate instructions which are unimplemented on some computers in a product family. For example, floating-point instructions may be implemented in hardware on some systems and emulated on lower-cost systems. 
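The timekeeping use of periodic timer interrupts can be sketched in a few lines: the handler does nothing but count ticks, and elapsed time is derived from the tick rate (1 kHz here, purely illustrative):

```python
# Sketch of timekeeping from a periodic timer interrupt: the handler only
# increments a counter; elapsed time is derived from the tick rate.

TICK_HZ = 1000          # assumed 1 kHz periodic timer interrupt
ticks = 0

def timer_isr():
    global ticks
    ticks += 1          # keep the handler minimal and fast

def elapsed_ms():
    return ticks * 1000 // TICK_HZ

for _ in range(250):    # simulate 250 timer interrupts firing
    timer_isr()
```

Keeping the handler this small matters: an OS scheduler or sampling loop hangs off the same tick, so any extra work in the ISR is paid at every interrupt.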
In the latter case, execution of an unimplemented floating point instruction will cause an "illegal instruction" exception interrupt. The interrupt handler will implement the floating point function in software and then return to the interrupted program as if the hardware-implemented instruction had been executed. This provides application software portability across the entire line. Interrupts are similar to signals, the difference being that signals are used for inter-process communication (IPC), mediated by the kernel (possibly via system calls) and handled by processes, while interrupts are mediated by the processor and handled by the kernel. The kernel may pass an interrupt as a signal to the process that caused it (typical examples are SIGSEGV, SIGBUS, SIGILL and SIGFPE).
[ { "paragraph_id": 0, "text": "In digital computers, an interrupt (sometimes referred to as a trap) is a request for the processor to interrupt currently executing code (when permitted), so that the event can be processed in a timely manner. If the request is accepted, the processor will suspend its current activities, save its state, and execute a function called an interrupt handler (or an interrupt service routine, ISR) to deal with the event. This interruption is often temporary, allowing the software to resume normal activities after the interrupt handler finishes, although the interrupt could instead indicate a fatal error.", "title": "" }, { "paragraph_id": 1, "text": "Interrupts are commonly used by hardware devices to indicate electronic or physical state changes that require time-sensitive attention. Interrupts are also commonly used to implement computer multitasking and systems calls, especially in real-time computing. Systems that use interrupts in these ways are said to be interrupt-driven.", "title": "" }, { "paragraph_id": 2, "text": "Hardware interrupts were introduced as an optimization, eliminating unproductive waiting time in polling loops, waiting for external events. The first system to use this approach was the DYSEAC, completed in 1954, although earlier systems provided error trap functions.", "title": "History" }, { "paragraph_id": 3, "text": "The UNIVAC 1103A computer is generally credited with the earliest use of interrupts in 1953. Earlier, on the UNIVAC I (1951) \"Arithmetic overflow either triggered the execution of a two-instruction fix-up routine at address 0, or, at the programmer's option, caused the computer to stop.\" The IBM 650 (1954) incorporated the first occurrence of interrupt masking. The National Bureau of Standards DYSEAC (1954) was the first to use interrupts for I/O. 
The IBM 704 was the first to use interrupts for debugging, with a \"transfer trap\", which could invoke a special routine when a branch instruction was encountered. The MIT Lincoln Laboratory TX-2 system (1957) was the first to provide multiple levels of priority interrupts.", "title": "History" }, { "paragraph_id": 4, "text": "Interrupt signals may be issued in response to hardware or software events. These are classified as hardware interrupts or software interrupts, respectively. For any particular processor, the number of interrupt types is limited by the architecture.", "title": "Types" }, { "paragraph_id": 5, "text": "A hardware interrupt is a condition related to the state of the hardware that may be signaled by an external hardware device, e.g., an interrupt request (IRQ) line on a PC, or detected by devices embedded in processor logic (e.g., the CPU timer in IBM System/370), to communicate that the device needs attention from the operating system (OS) or, if there is no OS, from the bare metal program running on the CPU. Such external devices may be part of the computer (e.g., disk controller) or they may be external peripherals. For example, pressing a keyboard key or moving a mouse plugged into a PS/2 port triggers hardware interrupts that cause the processor to read the keystroke or mouse position.", "title": "Types" }, { "paragraph_id": 6, "text": "Hardware interrupts can arrive asynchronously with respect to the processor clock, and at any time during instruction execution. Consequently, all incoming hardware interrupt signals are conditioned by synchronizing them to the processor clock, and acted upon only at instruction execution boundaries.", "title": "Types" }, { "paragraph_id": 7, "text": "In many systems, each device is associated with a particular IRQ signal. 
This makes it possible to quickly determine which hardware device is requesting service, and to expedite servicing of that device.", "title": "Types" }, { "paragraph_id": 8, "text": "On some older systems, such as the 1964 CDC 3600, all interrupts went to the same location, and the OS used a specialized instruction to determine the highest-priority outstanding unmasked interrupt. On contemporary systems, there is generally a distinct interrupt routine for each type of interrupt (or for each interrupt source), often implemented as one or more interrupt vector tables.", "title": "Types" }, { "paragraph_id": 9, "text": "To mask an interrupt is to disable it, so it is deferred or ignored by the processor, while to unmask an interrupt is to enable it.", "title": "Types" }, { "paragraph_id": 10, "text": "Processors typically have an internal interrupt mask register, which allows selective enabling (and disabling) of hardware interrupts. Each interrupt signal is associated with a bit in the mask register. On some systems, the interrupt is enabled when the bit is set, and disabled when the bit is clear. On others, the reverse is true, and a set bit disables the interrupt. When the interrupt is disabled, the associated interrupt signal may be ignored by the processor, or it may remain pending. Signals which are affected by the mask are called maskable interrupts.", "title": "Types" }, { "paragraph_id": 11, "text": "Some interrupt signals are not affected by the interrupt mask and therefore cannot be disabled; these are called non-maskable interrupts (NMIs). These indicate high-priority events which cannot be ignored under any circumstances, such as the timeout signal from a watchdog timer. 
With regard to SPARC, the Non-Maskable Interrupt (NMI), despite having the highest priority among interrupts, can be prevented from occurring through the use of an interrupt mask.", "title": "Types" }, { "paragraph_id": 12, "text": "One failure mode is when the hardware does not generate the expected interrupt for a change in state, causing the operating system to wait indefinitely. Depending on the details, the failure might affect only a single process or might have global impact. Some operating systems have code specifically to deal with this.", "title": "Types" }, { "paragraph_id": 13, "text": "As an example, IBM Operating System/360 (OS/360) relies on a not-ready to ready device-end interrupt when a tape has been mounted on a tape drive, and will not read the tape label until that interrupt occurs or is simulated. IBM added code in OS/360 so that the VARY ONLINE command will simulate a device end interrupt on the target device.", "title": "Types" }, { "paragraph_id": 14, "text": "A spurious interrupt is a hardware interrupt for which no source can be found. The term \"phantom interrupt\" or \"ghost interrupt\" may also be used to describe this phenomenon. Spurious interrupts tend to be a problem with a wired-OR interrupt circuit attached to a level-sensitive processor input. Such interrupts may be difficult to identify when a system misbehaves.", "title": "Types" }, { "paragraph_id": 15, "text": "In a wired-OR circuit, parasitic capacitance charging/discharging through the interrupt line's bias resistor will cause a small delay before the processor recognizes that the interrupt source has been cleared. If the interrupting device is cleared too late in the interrupt service routine (ISR), there will not be enough time for the interrupt circuit to return to the quiescent state before the current instance of the ISR terminates. 
The result is the processor will think another interrupt is pending, since the voltage at its interrupt request input will be not high or low enough to establish an unambiguous internal logic 1 or logic 0. The apparent interrupt will have no identifiable source, hence the \"spurious\" moniker.", "title": "Types" }, { "paragraph_id": 16, "text": "A spurious interrupt may also be the result of electrical anomalies due to faulty circuit design, high noise levels, crosstalk, timing issues, or more rarely, device errata.", "title": "Types" }, { "paragraph_id": 17, "text": "A spurious interrupt may result in system deadlock or other undefined operation if the ISR does not account for the possibility of such an interrupt occurring. As spurious interrupts are mostly a problem with wired-OR interrupt circuits, good programming practice in such systems is for the ISR to check all interrupt sources for activity and take no action (other than possibly logging the event) if none of the sources is interrupting. They may even lead to crashing of the computer in adverse scenarios.", "title": "Types" }, { "paragraph_id": 18, "text": "A software interrupt is requested by the processor itself upon executing particular instructions or when certain conditions are met. Every software interrupt signal is associated with a particular interrupt handler.", "title": "Types" }, { "paragraph_id": 19, "text": "A software interrupt may be intentionally caused by executing a special instruction which, by design, invokes an interrupt when executed. Such instructions function similarly to subroutine calls and are used for a variety of purposes, such as requesting operating system services and interacting with device drivers (e.g., to read or write storage media). Software interrupts may also be triggered by program execution errors or by the virtual memory system.", "title": "Types" }, { "paragraph_id": 20, "text": "Typically, the operating system kernel will catch and handle such interrupts. 
Some interrupts are handled transparently to the program - for example, the normal resolution of a page fault is to make the required page accessible in physical memory. But in other cases such as a segmentation fault the operating system executes a process callback. On Unix-like operating systems this involves sending a signal such as SIGSEGV, SIGBUS, SIGILL or SIGFPE, which may either call a signal handler or execute a default action (terminating the program). On Windows the callback is made using Structured Exception Handling with an exception code such as STATUS_ACCESS_VIOLATION or STATUS_INTEGER_DIVIDE_BY_ZERO.", "title": "Types" }, { "paragraph_id": 21, "text": "In a kernel process, it is often the case that some types of software interrupts are not supposed to happen. If they occur nonetheless, an operating system crash may result.", "title": "Types" }, { "paragraph_id": 22, "text": "The terms interrupt, trap, exception, fault, and abort are used to distinguish types of interrupts, although \"there is no clear consensus as to the exact meaning of these terms\". The term trap may refer to any interrupt, to any software interrupt, to any synchronous software interrupt, or only to interrupts caused by instructions with trap in their names. In some usages, the term trap refers specifically to a breakpoint intended to initiate a context switch to a monitor program or debugger. It may also refer to a synchronous interrupt caused by an exceptional condition (e.g., division by zero, invalid memory access, illegal opcode), although the term exception is more common for this.", "title": "Types" }, { "paragraph_id": 23, "text": "x86 divides interrupts into (hardware) interrupts and software exceptions, and identifies three types of exceptions: faults, traps, and aborts. (Hardware) interrupts are interrupts triggered asynchronously by an I/O device, and allow the program to be restarted with no loss of continuity. 
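The signal-based process callback described above can be demonstrated with Python's standard signal module. SIGUSR1 stands in for SIGSEGV here, since genuinely faulting would terminate the interpreter instead of returning control to the script; this is a Unix-only sketch:

```python
import os
import signal

# The kernel delivers a signal to the process; the registered handler
# runs instead of the default action.
caught = []

def handler(signum, frame):
    caught.append(signal.Signals(signum).name)

signal.signal(signal.SIGUSR1, handler)   # install the signal handler
os.kill(os.getpid(), signal.SIGUSR1)     # deliver the signal to ourselves
```

Had no handler been installed, the default action for the signal would have applied, which for SIGSEGV is terminating the program.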
A fault is restartable as well but is tied to the synchronous execution of an instruction - the return address points to the faulting instruction. A trap is similar to a fault except that the return address points to the instruction to be executed after the trapping instruction; one prominent use is to implement system calls. An abort is used for severe errors, such as hardware errors and illegal values in system tables, and often does not allow a restart of the program.", "title": "Types" }, { "paragraph_id": 24, "text": "Arm uses the term exception to refer to all types of interrupts, and divides exceptions into (hardware) interrupts, aborts, reset, and exception-generating instructions. Aborts correspond to x86 exceptions and may be prefetch aborts (failed instruction fetches) or data aborts (failed data accesses), and may be synchronous or asynchronous. Asynchronous aborts may be precise or imprecise. MMU aborts (page faults) are synchronous.", "title": "Types" }, { "paragraph_id": 25, "text": "RISC-V uses interrupt as the overall term as well as for the external subset; the internal ones are called exceptions.", "title": "Types" }, { "paragraph_id": 26, "text": "Each interrupt signal input is designed to be triggered by either a logic signal level or a particular signal edge (level transition). Level-sensitive inputs continuously request processor service so long as a particular (high or low) logic level is applied to the input. Edge-sensitive inputs react to signal edges: a particular (rising or falling) edge will cause a service request to be latched; the processor resets the latch when the interrupt handler executes.", "title": "Triggering methods" }, { "paragraph_id": 27, "text": "A level-triggered interrupt is requested by holding the interrupt signal at its particular (high or low) active logic level. A device invokes a level-triggered interrupt by driving the signal to and holding it at the active level. 
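The fault/trap distinction described above can be illustrated with a toy instruction loop: after a trap the saved return address points past the trapping instruction, while after a fault it points back at the faulting instruction so it can be retried once the handler fixes the cause. The opcodes and the single-page memory model are invented for illustration:

```python
# Toy illustration of x86 fault vs. trap return-address semantics.
def run(program):
    pc, trace = 0, []
    page_mapped = False
    while pc < len(program):
        op = program[pc]
        if op == "syscall":               # trap: resume AFTER the instruction
            trace.append(("trap", pc + 1))
            pc += 1
        elif op == "load" and not page_mapped:
            trace.append(("fault", pc))   # fault: return address = faulting pc
            page_mapped = True            # handler fixes the cause (maps page)
            # pc unchanged: the faulting instruction is re-executed
        else:
            trace.append(("exec", pc))
            pc += 1
    return trace

trace = run(["syscall", "load", "nop"])
# The load at pc 1 faults once, then executes successfully on retry.
```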
It negates the signal when the processor commands it to do so, typically after the device has been serviced.", "title": "Triggering methods" }, { "paragraph_id": 28, "text": "The processor samples the interrupt input signal during each instruction cycle. The processor will recognize the interrupt request if the signal is asserted when sampling occurs.", "title": "Triggering methods" }, { "paragraph_id": 29, "text": "Level-triggered inputs allow multiple devices to share a common interrupt signal via wired-OR connections. The processor polls to determine which devices are requesting service. After servicing a device, the processor may again poll and, if necessary, service other devices before exiting the ISR.", "title": "Triggering methods" }, { "paragraph_id": 30, "text": "An edge-triggered interrupt is an interrupt signaled by a level transition on the interrupt line, either a falling edge (high to low) or a rising edge (low to high). A device wishing to signal an interrupt drives a pulse onto the line and then releases the line to its inactive state. If the pulse is too short to be detected by polled I/O then special hardware may be required to detect it. The important part of edge triggering is that the signal must transition to trigger the interrupt; for example, if the signal was high-low-low, there would only be one falling edge interrupt triggered, and the continued low level would not trigger a further interrupt. The signal must return to the high level and fall again in order to trigger a further interrupt. This contrasts with a level trigger where the low level would continue to create interrupts (if they are enabled) until the signal returns to its high level.", "title": "Triggering methods" }, { "paragraph_id": 31, "text": "Computers with edge-triggered interrupts may include an interrupt register that retains the status of pending interrupts. 
Systems with interrupt registers generally have interrupt mask registers as well.", "title": "Triggering methods" }, { "paragraph_id": 32, "text": "The processor samples the interrupt trigger signals or interrupt register during each instruction cycle, and will process the highest priority enabled interrupt found. Regardless of the triggering method, the processor will begin interrupt processing at the next instruction boundary following a detected trigger, thus ensuring:", "title": "Processor response" }, { "paragraph_id": 33, "text": "There are several different architectures for handling interrupts. In some, there is a single interrupt handler that must scan for the highest priority enabled interrupt. In others, there are separate interrupt handlers for separate interrupt types, separate I/O channels or devices, or both. Several interrupt causes may have the same interrupt type and thus the same interrupt handler, requiring the interrupt handler to determine the cause.", "title": "Processor response" }, { "paragraph_id": 34, "text": "Interrupts may be fully handled in hardware by the CPU, or may be handled by both the CPU and another component such as a programmable interrupt controller or a southbridge.", "title": "System implementation" }, { "paragraph_id": 35, "text": "If an additional component is used, that component would be connected between the interrupting device and the processor's interrupt pin to multiplex several sources of interrupt onto the one or two CPU lines typically available. 
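The priority resolution just described, picking the highest-priority pending interrupt that is not masked, reduces to a couple of bit operations. The convention used here, that bit i of the interrupt register marks request i, bit i of the mask register disables it, and a higher bit index means higher priority, is an assumption for illustration:

```python
# Priority resolution with an interrupt (pending) register and a mask register.
def highest_enabled(pending, mask):
    """Bit index of the highest-priority serviceable interrupt, or None."""
    active = pending & ~mask        # drop masked-off requests
    if active == 0:
        return None
    return active.bit_length() - 1  # position of the most significant set bit

highest_enabled(0b1010, 0b1000)     # bit 3 is masked, so bit 1 is serviced
highest_enabled(0b0001, 0b0001)     # everything masked: nothing to service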
If implemented as part of the memory controller, interrupts are mapped into the system's memory address space.", "title": "System implementation" }, { "paragraph_id": 36, "text": "In systems on a chip (SoC) implementations, interrupts come from different blocks of the chip and are usually aggregated in an interrupt controller attached to one or several processors (in a multi-core system).", "title": "System implementation" }, { "paragraph_id": 37, "text": "Multiple devices may share an edge-triggered interrupt line if they are designed to. The interrupt line must have a pull-down or pull-up resistor so that when not actively driven it settles to its inactive state, which is the default state of it. Devices signal an interrupt by briefly driving the line to its non-default state, and let the line float (do not actively drive it) when not signaling an interrupt. This type of connection is also referred to as open collector. The line then carries all the pulses generated by all the devices. (This is analogous to the pull cord on some buses and trolleys that any passenger can pull to signal the driver that they are requesting a stop.) However, interrupt pulses from different devices may merge if they occur close in time. To avoid losing interrupts the CPU must trigger on the trailing edge of the pulse (e.g. the rising edge if the line is pulled up and driven low). After detecting an interrupt the CPU must check all the devices for service requirements.", "title": "System implementation" }, { "paragraph_id": 38, "text": "Edge-triggered interrupts do not suffer the problems that level-triggered interrupts have with sharing. Service of a low-priority device can be postponed arbitrarily, while interrupts from high-priority devices continue to be received and get serviced. If there is a device that the CPU does not know how to service, which may raise spurious interrupts, it will not interfere with interrupt signaling of other devices. 
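A minimal simulation of the shared, pulled-up line described above: the line reads low whenever any device drives it, pulses from two devices that overlap merge into a single low period, and the CPU triggers on the trailing (rising) edge before polling every device. The waveform is illustrative:

```python
# Simulation of a shared, open-collector interrupt line with a pull-up.
def line_level(drives):
    # drives: per-device booleans, True = actively pulling the line low
    return 0 if any(drives) else 1    # the pull-up resistor gives the idle 1

def rising_edges(samples):
    """Indices at which the line returns high: trailing edges of pulses."""
    return [i for i in range(1, len(samples))
            if samples[i - 1] == 0 and samples[i] == 1]

# Two devices pulse close together; their pulses merge into one low period,
# so only one trailing edge is seen and the CPU must then poll all devices.
per_cycle_drives = [(False, False), (True, False), (True, True),
                    (False, True), (False, False)]
samples = [line_level(d) for d in per_cycle_drives]
edges = rising_edges(samples)
```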
However, it is easy for an edge-triggered interrupt to be missed - for example, when interrupts are masked for a period - and unless there is some type of hardware latch that records the event it is impossible to recover. This problem caused many \"lockups\" in early computer hardware because the processor did not know it was expected to do something. More modern hardware often has one or more interrupt status registers that latch interrupt requests; well-written edge-driven interrupt handling code can check these registers to ensure no events are missed.", "title": "System implementation" }, { "paragraph_id": 39, "text": "The elderly Industry Standard Architecture (ISA) bus uses edge-triggered interrupts, without mandating that devices be able to share IRQ lines, but all mainstream ISA motherboards include pull-up resistors on their IRQ lines, so well-behaved ISA devices sharing IRQ lines should just work fine. The parallel port also uses edge-triggered interrupts. Many older devices assume that they have exclusive use of IRQ lines, making it electrically unsafe to share them.", "title": "System implementation" }, { "paragraph_id": 40, "text": "There are three ways multiple devices \"sharing the same line\" can be raised. First is by exclusive conduction (switching) or exclusive connection (to pins). Next is by bus (all connected to the same line listening): cards on a bus must know when they are to talk and not talk (i.e., the ISA bus). Talking can be triggered in two ways: by accumulation latch or by logic gates. Logic gates expect a continual data flow that is monitored for key signals. Accumulators only trigger when the remote side excites the gate beyond a threshold, thus no negotiated speed is required. Each has its speed versus distance advantages. 
A trigger, generally, is the method by which excitation is detected: rising edge, falling edge, threshold (an oscilloscope can trigger on a wide variety of shapes and conditions).", "title": "System implementation" }, { "paragraph_id": 41, "text": "Triggering for software interrupts must be built into the software (both in OS and app). A 'C' app has a trigger table (a table of functions) in its header, which both the app and OS know of and use appropriately; it is not related to hardware. However, do not confuse this with hardware interrupts which signal the CPU (the CPU enacts software from a table of functions, similarly to software interrupts).", "title": "System implementation" }, { "paragraph_id": 42, "text": "Multiple devices sharing an interrupt line (of any triggering style) all act as spurious interrupt sources with respect to each other. With many devices on one line, the workload in servicing interrupts grows in proportion to the square of the number of devices. It is therefore preferred to spread devices evenly across the available interrupt lines. Shortage of interrupt lines is a problem in older system designs where the interrupt lines are distinct physical conductors. Message-signaled interrupts, where the interrupt line is virtual, are favored in new system architectures (such as PCI Express) and relieve this problem to a considerable extent.", "title": "System implementation" }, { "paragraph_id": 43, "text": "Some devices with a poorly designed programming interface provide no way to determine whether they have requested service. They may lock up or otherwise misbehave if serviced when they do not want it. Such devices cannot tolerate spurious interrupts, and so also cannot tolerate sharing an interrupt line. ISA cards, due to often cheap design and construction, are notorious for this problem. 
Such devices are becoming much rarer, as hardware logic becomes cheaper and new system architectures mandate shareable interrupts.", "title": "System implementation" }, { "paragraph_id": 44, "text": "Some systems use a hybrid of level-triggered and edge-triggered signaling. The hardware not only looks for an edge, but it also verifies that the interrupt signal stays active for a certain period of time.", "title": "System implementation" }, { "paragraph_id": 45, "text": "A common use of a hybrid interrupt is for the NMI (non-maskable interrupt) input. Because NMIs generally signal major – or even catastrophic – system events, a good implementation of this signal tries to ensure that the interrupt is valid by verifying that it remains active for a period of time. This 2-step approach helps to eliminate false interrupts from affecting the system.", "title": "System implementation" }, { "paragraph_id": 46, "text": "A message-signaled interrupt does not use a physical interrupt line. Instead, a device signals its request for service by sending a short message over some communications medium, typically a computer bus. The message might be of a type reserved for interrupts, or it might be of some pre-existing type such as a memory write.", "title": "System implementation" }, { "paragraph_id": 47, "text": "Message-signalled interrupts behave very much like edge-triggered interrupts, in that the interrupt is a momentary signal rather than a continuous condition. Interrupt-handling software treats the two in much the same manner. Typically, multiple pending message-signaled interrupts with the same message (the same virtual interrupt line) are allowed to merge, just as closely spaced edge-triggered interrupts can merge.", "title": "System implementation" }, { "paragraph_id": 48, "text": "Message-signalled interrupt vectors can be shared, to the extent that the underlying communication medium can be shared. 
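The two-step hybrid check for an NMI described above amounts to requiring the signal to stay asserted for several consecutive samples after the edge; the hold count below is an arbitrary illustration:

```python
# Hybrid edge-plus-level validation of an NMI input.
def accept_nmi(samples, hold=3):
    """samples: successive reads of the NMI input after an edge, 1 = asserted.
    Accept the interrupt only if the assertion is sustained for `hold`
    consecutive samples; short glitches are rejected as false interrupts."""
    return len(samples) >= hold and all(s == 1 for s in samples[:hold])

accept_nmi([1, 1, 1, 0])   # genuine: held long enough
accept_nmi([1, 0])         # glitch: rejected
```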
No additional effort is required.", "title": "System implementation" }, { "paragraph_id": 49, "text": "Because the identity of the interrupt is indicated by a pattern of data bits, not requiring a separate physical conductor, many more distinct interrupts can be efficiently handled. This reduces the need for sharing. Interrupt messages can also be passed over a serial bus, not requiring any additional lines.", "title": "System implementation" }, { "paragraph_id": 50, "text": "PCI Express, a serial computer bus, uses message-signaled interrupts exclusively.", "title": "System implementation" }, { "paragraph_id": 51, "text": "In a push button analogy applied to computer systems, the term doorbell or doorbell interrupt is often used to describe a mechanism whereby a software system can signal or notify a computer hardware device that there is some work to be done. Typically, the software system will place data in some well-known and mutually agreed upon memory locations, and \"ring the doorbell\" by writing to a different memory location. This different memory location is often called the doorbell region, and there may even be multiple doorbells serving different purposes in this region. It is this act of writing to the doorbell region of memory that \"rings the bell\" and notifies the hardware device that the data are ready and waiting. The hardware device would now know that the data are valid and can be acted upon. It would typically write the data to a hard disk drive, or send them over a network, or encrypt them, etc.", "title": "System implementation" }, { "paragraph_id": 52, "text": "The term doorbell interrupt is usually a misnomer. It is similar to an interrupt, because it causes some work to be done by the device; however, the doorbell region is sometimes implemented as a polled region, sometimes the doorbell region writes through to physical device registers, and sometimes the doorbell region is hardwired directly to physical device registers. 
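The doorbell sequence described above, staging data at an agreed location and then writing to a separate doorbell location, can be sketched as follows; the addresses and class are hypothetical, not a real device interface:

```python
# Sketch of the doorbell mechanism: the write to the doorbell region is
# what notifies the (simulated) device that the staged data are valid.
class DoorbellDevice:
    DATA_ADDR = 0x1000       # well-known, mutually agreed data location
    DOORBELL_ADDR = 0x2000   # the doorbell region

    def __init__(self):
        self.memory = {}
        self.processed = []

    def write(self, addr, value):
        self.memory[addr] = value
        if addr == self.DOORBELL_ADDR:   # the "ring" itself
            self._on_doorbell()

    def _on_doorbell(self):
        # The device now knows the data at DATA_ADDR are valid; act on them.
        self.processed.append(self.memory[self.DATA_ADDR])

dev = DoorbellDevice()
dev.write(DoorbellDevice.DATA_ADDR, "packet-1")   # stage the data
dev.write(DoorbellDevice.DOORBELL_ADDR, 1)        # ring the doorbell
```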
When either writing through or directly to physical device registers, this may cause a real interrupt to occur at the device's central processor unit (CPU), if it has one.", "title": "System implementation" }, { "paragraph_id": 53, "text": "Doorbell interrupts can be compared to Message Signaled Interrupts, as they have some similarities.", "title": "System implementation" }, { "paragraph_id": 54, "text": "In multiprocessor systems, a processor may send an interrupt request to another processor via inter-processor interrupts (IPI).", "title": "System implementation" }, { "paragraph_id": 55, "text": "Interrupts provide low overhead and good latency at low load, but degrade significantly at high interrupt rate unless care is taken to prevent several pathologies. The phenomenon where the overall system performance is severely hindered by excessive amounts of processing time spent handling interrupts is called an interrupt storm.", "title": "Performance" }, { "paragraph_id": 56, "text": "There are various forms of livelocks, when the system spends all of its time processing interrupts to the exclusion of other required tasks. Under extreme conditions, a large number of interrupts (like very high network traffic) may completely stall the system. To avoid such problems, an operating system must schedule network interrupt handling as carefully as it schedules process execution.", "title": "Performance" }, { "paragraph_id": 57, "text": "With multi-core processors, additional performance improvements in interrupt handling can be achieved through receive-side scaling (RSS) when multiqueue NICs are used. Such NICs provide multiple receive queues associated to separate interrupts; by routing each of those interrupts to different cores, processing of the interrupt requests triggered by the network traffic received by a single NIC can be distributed among multiple cores. 
Distribution of the interrupts among cores can be performed automatically by the operating system, or the routing of interrupts (usually referred to as IRQ affinity) can be manually configured.", "title": "Performance" }, { "paragraph_id": 58, "text": "A purely software-based implementation of the receiving traffic distribution, known as receive packet steering (RPS), distributes received traffic among cores later in the data path, as part of the interrupt handler functionality. Advantages of RPS over RSS include no requirements for specific hardware, more advanced traffic distribution filters, and reduced rate of interrupts produced by a NIC. As a downside, RPS increases the rate of inter-processor interrupts (IPIs). Receive flow steering (RFS) takes the software-based approach further by accounting for application locality; further performance improvements are achieved by processing interrupt requests by the same cores on which particular network packets will be consumed by the targeted application.", "title": "Performance" }, { "paragraph_id": 59, "text": "Interrupts are commonly used to service hardware timers, transfer data to and from storage (e.g., disk I/O) and communication interfaces (e.g., UART, Ethernet), handle keyboard and mouse events, and to respond to any other time-sensitive events as required by the application system. Non-maskable interrupts are typically used to respond to high-priority requests such as watchdog timer timeouts, power-down signals and traps.", "title": "Typical uses" }, { "paragraph_id": 60, "text": "Hardware timers are often used to generate periodic interrupts. In some applications, such interrupts are counted by the interrupt handler to keep track of absolute or elapsed time, or used by the OS task scheduler to manage execution of running processes, or both. 
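The essence of RSS as described above is that a hash of a flow's addressing fields selects a receive queue, and each queue's interrupt is routed to a fixed core, so all packets of one flow are processed on the same core. A sketch using CRC32 in place of the Toeplitz hash that real NICs typically use; the queue count is illustrative:

```python
import zlib

NUM_QUEUES = 4   # one receive queue, interrupt, and core per queue

def rss_queue(src_ip, dst_ip, src_port, dst_port):
    """Hash the flow's addressing fields to a receive queue index."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_QUEUES

q1 = rss_queue("10.0.0.1", "10.0.0.2", 40000, 443)
q2 = rss_queue("10.0.0.1", "10.0.0.2", 40000, 443)
# The same flow always hashes to the same queue, hence the same core.
```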
Periodic interrupts are also commonly used to invoke sampling from input devices such as analog-to-digital converters, incremental encoder interfaces, and GPIO inputs, and to program output devices such as digital-to-analog converters, motor controllers, and GPIO outputs.", "title": "Typical uses" }, { "paragraph_id": 61, "text": "A disk interrupt signals the completion of a data transfer from or to the disk peripheral; this may cause a process to run which is waiting to read or write. A power-off interrupt predicts imminent loss of power, allowing the computer to perform an orderly shut-down while there still remains enough power to do so. Keyboard interrupts typically cause keystrokes to be buffered so as to implement typeahead.", "title": "Typical uses" }, { "paragraph_id": 62, "text": "Interrupts are sometimes used to emulate instructions which are unimplemented on some computers in a product family. For example, floating point instructions may be implemented in hardware on some systems and emulated on lower-cost systems. In the latter case, execution of an unimplemented floating point instruction will cause an \"illegal instruction\" exception interrupt. The interrupt handler will implement the floating point function in software and then return to the interrupted program as if the hardware-implemented instruction had been executed. This provides application software portability across the entire line.", "title": "Typical uses" }, { "paragraph_id": 63, "text": "Interrupts are similar to signals, the difference being that signals are used for inter-process communication (IPC), mediated by the kernel (possibly via system calls) and handled by processes, while interrupts are mediated by the processor and handled by the kernel. The kernel may pass an interrupt as a signal to the process that caused it (typical examples are SIGSEGV, SIGBUS, SIGILL and SIGFPE).", "title": "Typical uses" } ]
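The emulation scheme described above can be reduced to a toy dispatcher: an unimplemented operation raises the equivalent of an "illegal instruction" condition, and a handler computes the result in software and returns transparently. The opcode names are invented:

```python
# Toy model of instruction emulation via an illegal-instruction trap.
HARDWARE_OPS = {"fadd": lambda a, b: a + b}   # implemented in "hardware"
EMULATED_OPS = {"fmul": lambda a, b: a * b}   # software fallback

def execute(op, a, b):
    try:
        return HARDWARE_OPS[op](a, b)
    except KeyError:
        # "Illegal instruction" trap: the handler emulates the operation
        # and returns as if hardware had executed it.
        return EMULATED_OPS[op](a, b)

execute("fadd", 2.0, 3.0)   # hardware path
execute("fmul", 2.0, 3.0)   # emulated path, invisible to the caller
```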
In digital computers, an interrupt is a request for the processor to interrupt currently executing code, so that the event can be processed in a timely manner. If the request is accepted, the processor will suspend its current activities, save its state, and execute a function called an interrupt handler to deal with the event. This interruption is often temporary, allowing the software to resume normal activities after the interrupt handler finishes, although the interrupt could instead indicate a fatal error. Interrupts are commonly used by hardware devices to indicate electronic or physical state changes that require time-sensitive attention. Interrupts are also commonly used to implement computer multitasking and system calls, especially in real-time computing. Systems that use interrupts in these ways are said to be interrupt-driven.
2001-11-28T13:53:01Z
2023-12-31T11:04:02Z
[ "Template:Further", "Template:POV section", "Template:Div col", "Template:Reflist", "Template:Cite manual", "Template:Operating system", "Template:About", "Template:OS", "Template:Main", "Template:Unreferenced section", "Template:Cite web", "Template:Authority control", "Template:More citations needed", "Template:Efn", "Template:Anchor", "Template:Cn", "Template:Cite book", "Template:Wiktionary", "Template:Short description", "Template:Div col end", "Template:Notelist", "Template:Cite journal", "Template:Citation", "Template:Portal" ]
https://en.wikipedia.org/wiki/Interrupt
15,290
Intercalation (timekeeping)
Intercalation or embolism in timekeeping is the insertion of a leap day, week, or month into some calendar years to make the calendar follow the seasons or moon phases. Lunisolar calendars may require intercalations of both days and months. The solar or tropical year does not have a whole number of days (it is about 365.24 days), but a calendar year must have a whole number of days. The most common way to reconcile the two is to vary the number of days in the calendar year. In solar calendars, this is done by adding to a common year of 365 days, an extra day ("leap day" or "intercalary day") about every four years, causing a leap year to have 366 days (Julian, Gregorian and Indian national calendars). The Decree of Canopus, which was issued by the pharaoh Ptolemy III Euergetes of Ancient Egypt in 239 BCE, decreed a solar leap day system; an Egyptian leap year was not adopted until 25 BC, when the Roman Emperor Augustus successfully instituted a reformed Alexandrian calendar. In the Julian calendar, as well as in the Gregorian calendar, which improved upon it, intercalation is done by adding an extra day to February in each leap year. In the Julian calendar this was done every four years. In the Gregorian, years divisible by 100 but not 400 were exempted in order to improve accuracy. Thus, 2000 was a leap year; 1700, 1800, and 1900 were not. Epagomenal days are days within a solar calendar that are outside any regular month. Usually five epagomenal days are included within every year (Egyptian, Coptic, Ethiopian, Mayan Haab' and French Republican Calendars), but a sixth epagomenal day is intercalated every four years in some (Coptic, Ethiopian and French Republican calendars). The Solar Hijri calendar, used in Iran, is based on solar calculations and is similar to the Gregorian calendar in its structure, and hence the intercalation, with the exception that its epoch is the Hijrah. 
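The Gregorian rule stated above (a leap day every fourth year, except century years not divisible by 400) is directly expressible as a function:

```python
# Gregorian intercalation rule.
def is_gregorian_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 2000 was a leap year; 1700, 1800, and 1900 were not.
[is_gregorian_leap(y) for y in (2000, 1900, 1800, 1700)]
```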
The Bahá'í calendar includes enough epagomenal days (usually 4 or 5) before the last month (علاء, ʿalāʾ) to ensure that the following year starts on the March equinox. These are known as the Ayyám-i-Há. The solar year does not have a whole number of lunar months (it is about 365/29.5 = 12.37 lunations), so a lunisolar calendar must have a variable number of months in a year. Regular years have 12 months, but embolismic years insert a 13th "intercalary" or "leap" or "embolismic" month every second or third year (see blue moon). Whether to insert an intercalary month in a given year may be determined using regular cycles such as the 19-year Metonic cycle (Hebrew calendar and in the determination of Easter) or using calculations of lunar phases (Hindu lunisolar and Chinese calendars). The Buddhist calendar adds both an intercalary day and month on a usually regular cycle. In principle, lunar calendars do not employ intercalation because they do not seek to synchronise with the seasons and the motion of the moon is astronomically predictable. However, religious lunar calendars rely on actual observation. The Lunar Hijri calendar, the purely lunar calendar observed by most of Islam, depends on actual observation of the first crescent of the moon and consequently does not have any intercalation. Each month still has either 29 or 30 days, but due to the variable method of observations employed, there is usually no discernible order in the sequencing of either 29 or 30-day month lengths. Traditionally, the first day of each month is the day (beginning at sunset) of the first sighting of the hilal (crescent moon) shortly after sunset. If the hilal is not observed immediately after the 29th day of a month (either because clouds block its view or because the western sky is still too bright when the moon sets), then the day that begins at that sunset is the 30th. 
The tabular Islamic calendar has 12 lunar months that usually alternate between 30 and 29 days every year, but an intercalary day is added to the last month of the year 12 times within a 33-year cycle. Some historians also linked the pre-Islamic practice of Nasi' to intercalation. The International Earth Rotation and Reference Systems Service can insert or remove leap seconds from the last day of any month (June and December are preferred). These are sometimes described as intercalary. ISO 8601 includes a specification for a 52/53-week year. Any year that has 53 Thursdays has 53 weeks; this extra week may be regarded as intercalary. The xiuhpōhualli (year count) system of the Aztec calendar had five intercalary days after the eighteenth and final month, the nēmontēmi, in which the people reflected on the past year and fasted.
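The ISO 8601 52/53-week rule mentioned above can be checked with the standard library: 28 December always falls in the last ISO week of its year, so its week number reveals whether the year carries the intercalary 53rd week:

```python
import datetime

def iso_weeks(year):
    """Number of ISO 8601 weeks (52 or 53) in the given year."""
    # 28 December is always in the last ISO week of its year.
    return datetime.date(year, 12, 28).isocalendar()[1]

iso_weeks(2015)   # 2015 began on a Thursday, so it has 53 ISO weeks
iso_weeks(2016)   # an ordinary 52-week year
```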
[ { "paragraph_id": 0, "text": "Intercalation or embolism in timekeeping is the insertion of a leap day, week, or month into some calendar years to make the calendar follow the seasons or moon phases. Lunisolar calendars may require intercalations of both days and months.", "title": "" }, { "paragraph_id": 1, "text": "The solar or tropical year does not have a whole number of days (it is about 365.24 days), but a calendar year must have a whole number of days. The most common way to reconcile the two is to vary the number of days in the calendar year.", "title": "Solar calendars" }, { "paragraph_id": 2, "text": "In solar calendars, this is done by adding to a common year of 365 days, an extra day (\"leap day\" or \"intercalary day\") about every four years, causing a leap year to have 366 days (Julian, Gregorian and Indian national calendars).", "title": "Solar calendars" }, { "paragraph_id": 3, "text": "The Decree of Canopus, which was issued by the pharaoh Ptolemy III Euergetes of Ancient Egypt in 239 BCE, decreed a solar leap day system; an Egyptian leap year was not adopted until 25 BC, when the Roman Emperor Augustus successfully instituted a reformed Alexandrian calendar.", "title": "Solar calendars" }, { "paragraph_id": 4, "text": "In the Julian calendar, as well as in the Gregorian calendar, which improved upon it, intercalation is done by adding an extra day to February in each leap year. In the Julian calendar this was done every four years. In the Gregorian, years divisible by 100 but not 400 were exempted in order to improve accuracy. Thus, 2000 was a leap year; 1700, 1800, and 1900 were not.", "title": "Solar calendars" }, { "paragraph_id": 5, "text": "Epagomenal days are days within a solar calendar that are outside any regular month. 
Usually five epagomenal days are included within every year (Egyptian, Coptic, Ethiopian, Mayan Haab' and French Republican Calendars), but a sixth epagomenal day is intercalated every four years in some (Coptic, Ethiopian and French Republican calendars).", "title": "Solar calendars" }, { "paragraph_id": 6, "text": "The Solar Hijri calendar, used in Iran, is based on solar calculations and is similar to the Gregorian calendar in its structure, and hence the intercalation, with the exception that its epoch is the Hijrah.", "title": "Solar calendars" }, { "paragraph_id": 7, "text": "The Bahá'í calendar includes enough epagomenal days (usually 4 or 5) before the last month (علاء, ʿalāʾ) to ensure that the following year starts on the March equinox. These are known as the Ayyám-i-Há.", "title": "Solar calendars" }, { "paragraph_id": 8, "text": "The solar year does not have a whole number of lunar months (it is about 365/29.5 = 12.37 lunations), so a lunisolar calendar must have a variable number of months in a year. Regular years have 12 months, but embolismic years insert a 13th \"intercalary\" or \"leap\" or \"embolismic\" month every second or third year (see blue moon). Whether to insert an intercalary month in a given year may be determined using regular cycles such as the 19-year Metonic cycle (Hebrew calendar and in the determination of Easter) or using calculations of lunar phases (Hindu lunisolar and Chinese calendars). The Buddhist calendar adds both an intercalary day and month on a usually regular cycle.", "title": "Lunisolar calendars" }, { "paragraph_id": 9, "text": "In principle, lunar calendars do not employ intercalation because they do not seek to synchronise with the seasons and the motion of the moon is astronomically predictable. 
However, religious lunar calendars rely on actual observation.", "title": "Lunar calendars" }, { "paragraph_id": 10, "text": "The Lunar Hijri calendar, the purely lunar calendar observed by most of Islam, depends on actual observation of the first crescent of the moon and consequently does not have any intercalation. Each month still has either 29 or 30 days, but due to the variable method of observations employed, there is usually no discernible order in the sequencing of either 29 or 30-day month lengths. Traditionally, the first day of each month is the day (beginning at sunset) of the first sighting of the hilal (crescent moon) shortly after sunset. If the hilal is not observed immediately after the 29th day of a month (either because clouds block its view or because the western sky is still too bright when the moon sets), then the day that begins at that sunset is the 30th.", "title": "Lunar calendars" }, { "paragraph_id": 11, "text": "The tabular Islamic calendar, used in Iran, has 12 lunar months that usually alternate between 30 and 29 days every year, but an intercalary day is added to the last month of the year 12 times within a 33-year cycle. Some historians also linked the pre-Islamic practice of Nasi' to intercalation.", "title": "Lunar calendars" }, { "paragraph_id": 12, "text": "The International Earth Rotation and Reference Systems Service can insert or remove leap seconds from the last day of any month (June and December are preferred). These are sometimes described as intercalary.", "title": "Leap seconds" }, { "paragraph_id": 13, "text": "ISO 8601 includes a specification for a 52/53-week year. 
Any year that has 53 Thursdays has 53 weeks; this extra week may be regarded as intercalary.", "title": "Other uses" }, { "paragraph_id": 14, "text": "The xiuhpōhualli (year count) system of the Aztec calendar had five intercalary days after the eighteenth and final month, the nēmontēmi, in which the people reflect on the past year and do fasting.", "title": "Other uses" } ]
Intercalation or embolism in timekeeping is the insertion of a leap day, week, or month into some calendar years to make the calendar follow the seasons or moon phases. Lunisolar calendars may require intercalations of both days and months.
2001-11-28T14:03:19Z
2023-11-17T03:52:20Z
[ "Template:More", "Template:Cite web", "Template:Time Topics", "Template:Time measurement and standards", "Template:Morerefs", "Template:Wiktionary", "Template:Further", "Template:Cite EB1911", "Template:Short description", "Template:Lang", "Template:Reflist" ]
https://en.wikipedia.org/wiki/Intercalation_(timekeeping)
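The two mechanical rules in this entry, the Gregorian leap-year exemption (2000 was a leap year; 1700, 1800 and 1900 were not) and the ISO 8601 52/53-week year, can be sketched in a few lines of Python. This is an illustrative sketch, not part of the source record; the function names are mine, and the second function relies on the standard-library fact that 28 December always falls in the last ISO week of its year.

```python
from datetime import date

def is_gregorian_leap(year: int) -> bool:
    # Gregorian rule: every 4th year is a leap year, except century
    # years (divisible by 100) unless they are also divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def weeks_in_iso_year(year: int) -> int:
    # ISO 8601: 28 December is always in the final ISO week of its
    # year, so its week number (52 or 53) is the year's week count.
    return date(year, 12, 28).isocalendar()[1]
```

A year has 53 ISO weeks exactly when it has 53 Thursdays, as the entry notes; `weeks_in_iso_year` just reads that result off the standard calendar functions rather than counting Thursdays directly.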
15,291
Intercourse
Intercourse most commonly refers to sexual intercourse. It may also refer to social intercourse, meaning communication and dealings between people, or to places such as Intercourse, Pennsylvania, United States.
[ { "paragraph_id": 0, "text": "Intercourse most commonly refers to sexual intercourse. It may also refer to social intercourse, meaning communication and dealings between people, or to places such as Intercourse, Pennsylvania, United States.", "title": "" } ]
Intercourse most commonly refers to sexual intercourse. It may also refer to social intercourse, meaning communication and dealings between people, or to places such as Intercourse, Pennsylvania, United States.
2002-02-25T15:43:11Z
2023-10-10T11:31:01Z
[ "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/Intercourse
15,292
Ink
Ink is a gel, sol, or solution that contains at least one colorant, such as a dye or pigment, and is used to color a surface to produce an image, text, or design. Ink is used for drawing or writing with a pen, brush, reed pen, or quill. Thicker inks, in paste form, are used extensively in letterpress and lithographic printing. Ink can be a complex medium, composed of solvents, pigments, dyes, resins, lubricants, solubilizers, surfactants, particulate matter, fluorescents, and other materials. The components of inks serve many purposes; the ink's carrier, colorants, and other additives affect the flow and thickness of the ink and its dry appearance. Many ancient cultures around the world have independently discovered and formulated inks for the purposes of writing and drawing. The knowledge of the inks, their recipes and the techniques for their production comes from archaeological analysis or from written text itself. The earliest inks from all civilizations are believed to have been made with lampblack, a kind of soot, as this would have been easily collected as a by-product of fire. Ink was used in Ancient Egypt for writing and drawing on papyrus from at least the 26th century BC. Egyptian red and black inks included iron and ocher as a pigment, in addition to phosphate, sulfate, chloride, and carboxylate ions; meanwhile, lead was used as a drier. Chinese inks may go back as far as four millennia, to the Chinese Neolithic Period. These used plants, animal, and mineral inks based on such materials as graphite that were ground with water and applied with ink brushes. Direct evidence for the earliest Chinese inks, similar to modern inksticks, is around 256 BC in the end of the Warring States period and produced from soot and animal glue. The best inks for drawing or painting on paper or silk are produced from the resin of the pine tree. The trees must be between 50 and 100 years old. 
The Chinese inkstick is produced with a fish glue, whereas Japanese glue (膠 nikawa) is from cow or stag. India ink was invented in China, though materials were often traded from India, hence the name. The traditional Chinese method of making the ink was to grind a mixture of hide glue, carbon black, lampblack, and bone black pigment with a pestle and mortar, then pour it into a ceramic dish to dry. To use the dry mixture, a wet brush would be applied until it reliquified. The manufacture of India ink was well-established by the Cao Wei dynasty (220–265 AD). Indian documents written in Kharosthi with ink have been unearthed in Xinjiang. The practice of writing with ink and a sharp pointed needle was common in early South India. Several Buddhist and Jain sutras in India were compiled in ink. Cephalopod ink, known as sepia, turns from dark blue-black to brown on drying, and was used as an ink in the Graeco-Roman period and subsequently. Black atramentum was also used in ancient Rome; in an article for The Christian Science Monitor, Sharon J. Huntington describes these other historical inks: About 1,600 years ago, a popular ink recipe was created. The recipe was used for centuries. Iron salts, such as ferrous sulfate (made by treating iron with sulfuric acid), were mixed with tannin from gallnuts (they grow on trees) and a thickener. When first put to paper, this ink is bluish-black. Over time it fades to a dull brown. Scribes in medieval Europe (about AD 800 to 1500) wrote principally on parchment or vellum. One 12th century ink recipe called for hawthorn branches to be cut in the spring and left to dry. Then the bark was pounded from the branches and soaked in water for eight days. The water was boiled until it thickened and turned black. Wine was added during boiling. The ink was poured into special bags and hung in the sun. Once dried, the mixture was mixed with wine and iron salt over a fire to make the final ink. 
The reservoir pen, which may have been the first fountain pen, dates back to 953, when Ma'ād al-Mu'izz, the caliph of Egypt, demanded a pen that would not stain his hands or clothes, and was provided with a pen that held ink in a reservoir. In the 15th century, a new type of ink had to be developed in Europe for the printing press by Johannes Gutenberg. According to Martyn Lyons in his book Books: A Living History, Gutenberg's dye was indelible, oil-based, and made from the soot of lamps (lamp-black) mixed with varnish and egg white. Two types of ink were prevalent at the time: the Greek and Roman writing ink (soot, glue, and water) and the 12th century variety composed of ferrous sulfate, gall, gum, and water. Neither of these handwriting inks could adhere to printing surfaces without creating blurs. Eventually an oily, varnish-like ink made of soot, turpentine, and walnut oil was created specifically for the printing press. Ink formulas vary, but commonly involve two components: Inks generally fall into four classes: Pigment inks are used more frequently than dyes because they are more color-fast, but they are also more expensive, less consistent in color, and have less of a color range than dyes. Pigments are solid, opaque particles suspended in ink to provide color. Pigment molecules typically link together in crystalline structures that are 0.1–2 µm in size and comprise 5–30 percent of the ink volume. Qualities such as hue, saturation, and lightness vary depending on the source and type of pigment. Dye-based inks are generally much stronger than pigment-based inks and can produce much more color of a given density per unit of mass. However, because dyes are dissolved in the liquid phase, they have a tendency to soak into paper, potentially allowing the ink to bleed at the edges of an image. 
To circumvent this problem, dye-based inks are made with solvents that dry rapidly or are used with quick-drying methods of printing, such as blowing hot air on the fresh print. Other methods include harder paper sizing and more specialized paper coatings. The latter is particularly suited to inks used in non-industrial settings (which must conform to tighter toxicity and emission controls), such as inkjet printer inks. Another technique involves coating the paper with a charged coating. If the dye has the opposite charge, it is attracted to and retained by this coating, while the solvent soaks into the paper. Cellulose, the wood-derived material most paper is made of, is naturally charged, and so a compound that complexes with both the dye and the paper's surface aids retention at the surface. Such a compound is commonly used in ink-jet printing inks. An additional advantage of dye-based ink systems is that the dye molecules can interact with other ink ingredients, potentially allowing greater benefit as compared to pigmented inks from optical brighteners and color-enhancing agents designed to increase the intensity and appearance of dyes. Dye-based inks can be used for anti-counterfeit purposes and can be found in some gel inks, fountain pen inks, and inks used for paper currency. These inks react with cellulose to bring about a permanent color change. Dye based inks are used to color hair. There is a misconception that ink is non-toxic even if swallowed. Once ingested, ink can be hazardous to one's health. Certain inks, such as those used in digital printers, and even those found in a common pen can be harmful. Though ink does not easily cause death, repeated skin contact or ingestion can cause effects such as severe headaches, skin irritation, or nervous system damage. These effects can be caused by solvents, or by pigment ingredients such as p-Anisidine, which helps create some inks' color and shine. 
Three main environmental issues with ink are: Some regulatory bodies have set standards for the amount of heavy metals in ink. There is a trend toward vegetable oils rather than petroleum oils in recent years in response to a demand for better environmental sustainability performance. Ink uses up non-renewable oils and metals, which has a negative impact on the environment. Carbon inks were commonly made from lampblack or soot and a binding agent such as gum arabic or animal glue. The binding agent keeps carbon particles in suspension and adhered to paper. Carbon particles do not fade over time even when bleached or when in sunlight. One benefit is that carbon ink does not harm paper. Over time, the ink is chemically stable and therefore does not threaten the paper's strength. Despite these benefits, carbon ink is not ideal for permanence and ease of preservation. Carbon ink tends to smudge in humid environments and can be washed off surfaces. The best method of preserving a document written in carbon ink is to store it in a dry environment (Barrow 1972). Recently, carbon inks made from carbon nanotubes have been successfully created. They are similar in composition to traditional inks in that they use a polymer to suspend the carbon nanotubes. These inks can be used in inkjet printers and produce electrically conductive patterns. Iron gall inks became prominent in the early 12th century; they were used for centuries and were widely thought to be the best type of ink. However, iron gall ink is corrosive and damages paper over time (Waters 1940). Items containing this ink can become brittle and the writing fades to brown. The original scores of Johann Sebastian Bach are threatened by the destructive properties of iron gall ink. The majority of his works are held by the German State Library, and about 25% of those are in advanced stages of decay (American Libraries 2000). 
The rate at which the writing fades is based on several factors, such as proportions of ink ingredients, amount deposited on the paper, and paper composition (Barrow 1972:16). Corrosion is caused by acid catalyzed hydrolysis and iron(II)-catalysed oxidation of cellulose (Rouchon-Quillet 2004:389). Treatment is a controversial subject. No treatment undoes damage already caused by acidic ink. Deterioration can only be stopped or slowed. Some think it best not to treat the item at all for fear of the consequences. Others believe that non-aqueous procedures are the best solution. Yet others think an aqueous procedure may preserve items written with iron gall ink. Aqueous treatments include distilled water at different temperatures, calcium hydroxide, calcium bicarbonate, magnesium carbonate, magnesium bicarbonate, and calcium hyphenate. There are many possible side effects from these treatments. There can be mechanical damage, which further weakens the paper. Paper color or ink color may change, and ink may bleed. Other consequences of aqueous treatment are a change of ink texture or formation of plaque on the surface of the ink (Reibland & de Groot 1999). Iron gall inks require storage in a stable environment, because fluctuating relative humidity increases the rate that formic acid, acetic acid, and furan derivatives form in the material the ink was used on. Sulfuric acid acts as a catalyst to cellulose hydrolysis, and iron (II) sulfate acts as a catalyst to cellulose oxidation. These chemical reactions physically weaken the paper, causing brittleness. Indelible means "un-removable". Some types of indelible ink have a very short shelf life because of the quickly evaporating solvents used. India, Mexico, Indonesia, Malaysia and other developing countries have used indelible ink in the form of electoral stain to prevent electoral fraud. 
Election ink based on silver nitrate was first applied in the 1962 Indian general election, after being developed at the National Physical Laboratory of India. The election commission in India has used indelible ink for many elections. Indonesia used it in its last election in Aceh. In Mali, the ink is applied to the fingernail. Indelible ink itself is not infallible as it can be used to commit electoral fraud by marking opponent party members before they have chances to cast their votes. There are also reports of "indelible" ink washing off voters' fingers in Afghanistan.
[ { "paragraph_id": 0, "text": "Ink is a gel, sol, or solution that contains at least one colorant, such as a dye or pigment, and is used to color a surface to produce an image, text, or design. Ink is used for drawing or writing with a pen, brush, reed pen, or quill. Thicker inks, in paste form, are used extensively in letterpress and lithographic printing.", "title": "" }, { "paragraph_id": 1, "text": "Ink can be a complex medium, composed of solvents, pigments, dyes, resins, lubricants, solubilizers, surfactants, particulate matter, fluorescents, and other materials. The components of inks serve many purposes; the ink's carrier, colorants, and other additives affect the flow and thickness of the ink and its dry appearance.", "title": "" }, { "paragraph_id": 2, "text": "Many ancient cultures around the world have independently discovered and formulated inks for the purposes of writing and drawing. The knowledge of the inks, their recipes and the techniques for their production comes from archaeological analysis or from written text itself. The earliest inks from all civilizations are believed to have been made with lampblack, a kind of soot, as this would have been easily collected as a by-product of fire.", "title": "History" }, { "paragraph_id": 3, "text": "Ink was used in Ancient Egypt for writing and drawing on papyrus from at least the 26th century BC. Egyptian red and black inks included iron and ocher as a pigment, in addition to phosphate, sulfate, chloride, and carboxylate ions; meanwhile, lead was used as a drier.", "title": "History" }, { "paragraph_id": 4, "text": "Chinese inks may go back as far as four millennia, to the Chinese Neolithic Period. These used plants, animal, and mineral inks based on such materials as graphite that were ground with water and applied with ink brushes. 
Direct evidence for the earliest Chinese inks, similar to modern inksticks, is around 256 BC in the end of the Warring States period and produced from soot and animal glue. The best inks for drawing or painting on paper or silk are produced from the resin of the pine tree. The trees must be between 50 and 100 years old. The Chinese inkstick is produced with a fish glue, whereas Japanese glue (膠 nikawa) is from cow or stag.", "title": "History" }, { "paragraph_id": 5, "text": "India ink was invented in China, though materials were often traded from India, hence the name. The traditional Chinese method of making the ink was to grind a mixture of hide glue, carbon black, lampblack, and bone black pigment with a pestle and mortar, then pour it into a ceramic dish to dry. To use the dry mixture, a wet brush would be applied until it reliquified. The manufacture of India ink was well-established by the Cao Wei dynasty (220–265 AD). Indian documents written in Kharosthi with ink have been unearthed in Xinjiang. The practice of writing with ink and a sharp pointed needle was common in early South India. Several Buddhist and Jain sutras in India were compiled in ink.", "title": "History" }, { "paragraph_id": 6, "text": "Cephalopod ink, known as sepia, turns from dark blue-black to brown on drying, and was used as an ink in the Graeco-Roman period and subsequently. Black atramentum was also used in ancient Rome; in an article for The Christian Science Monitor, Sharon J. Huntington describes these other historical inks:", "title": "History" }, { "paragraph_id": 7, "text": "About 1,600 years ago, a popular ink recipe was created. The recipe was used for centuries. Iron salts, such as ferrous sulfate (made by treating iron with sulfuric acid), were mixed with tannin from gallnuts (they grow on trees) and a thickener. When first put to paper, this ink is bluish-black. 
Over time it fades to a dull brown.", "title": "History" }, { "paragraph_id": 8, "text": "Scribes in medieval Europe (about AD 800 to 1500) wrote principally on parchment or vellum. One 12th century ink recipe called for hawthorn branches to be cut in the spring and left to dry. Then the bark was pounded from the branches and soaked in water for eight days. The water was boiled until it thickened and turned black. Wine was added during boiling. The ink was poured into special bags and hung in the sun. Once dried, the mixture was mixed with wine and iron salt over a fire to make the final ink.", "title": "History" }, { "paragraph_id": 9, "text": "The reservoir pen, which may have been the first fountain pen, dates back to 953, when Ma'ād al-Mu'izz, the caliph of Egypt, demanded a pen that would not stain his hands or clothes, and was provided with a pen that held ink in a reservoir.", "title": "History" }, { "paragraph_id": 10, "text": "In the 15th century, a new type of ink had to be developed in Europe for the printing press by Johannes Gutenberg. According to Martyn Lyons in his book Books: A Living History, Gutenberg's dye was indelible, oil-based, and made from the soot of lamps (lamp-black) mixed with varnish and egg white. Two types of ink were prevalent at the time: the Greek and Roman writing ink (soot, glue, and water) and the 12th century variety composed of ferrous sulfate, gall, gum, and water. Neither of these handwriting inks could adhere to printing surfaces without creating blurs. 
Eventually an oily, varnish-like ink made of soot, turpentine, and walnut oil was created specifically for the printing press.", "title": "History" }, { "paragraph_id": 11, "text": "Ink formulas vary, but commonly involve two components:", "title": "Types" }, { "paragraph_id": 12, "text": "Inks generally fall into four classes:", "title": "Types" }, { "paragraph_id": 13, "text": "Pigment inks are used more frequently than dyes because they are more color-fast, but they are also more expensive, less consistent in color, and have less of a color range than dyes. Pigments are solid, opaque particles suspended in ink to provide color. Pigment molecules typically link together in crystalline structures that are 0.1–2 µm in size and comprise 5–30 percent of the ink volume. Qualities such as hue, saturation, and lightness vary depending on the source and type of pigment.", "title": "Types" }, { "paragraph_id": 14, "text": "Dye-based inks are generally much stronger than pigment-based inks and can produce much more color of a given density per unit of mass. However, because dyes are dissolved in the liquid phase, they have a tendency to soak into paper, potentially allowing the ink to bleed at the edges of an image.", "title": "Types" }, { "paragraph_id": 15, "text": "To circumvent this problem, dye-based inks are made with solvents that dry rapidly or are used with quick-drying methods of printing, such as blowing hot air on the fresh print. Other methods include harder paper sizing and more specialized paper coatings. The latter is particularly suited to inks used in non-industrial settings (which must conform to tighter toxicity and emission controls), such as inkjet printer inks. Another technique involves coating the paper with a charged coating. If the dye has the opposite charge, it is attracted to and retained by this coating, while the solvent soaks into the paper. 
Cellulose, the wood-derived material most paper is made of, is naturally charged, and so a compound that complexes with both the dye and the paper's surface aids retention at the surface. Such a compound is commonly used in ink-jet printing inks.", "title": "Types" }, { "paragraph_id": 16, "text": "An additional advantage of dye-based ink systems is that the dye molecules can interact with other ink ingredients, potentially allowing greater benefit as compared to pigmented inks from optical brighteners and color-enhancing agents designed to increase the intensity and appearance of dyes.", "title": "Types" }, { "paragraph_id": 17, "text": "Dye-based inks can be used for anti-counterfeit purposes and can be found in some gel inks, fountain pen inks, and inks used for paper currency. These inks react with cellulose to bring about a permanent color change. Dye based inks are used to color hair.", "title": "Types" }, { "paragraph_id": 18, "text": "There is a misconception that ink is non-toxic even if swallowed. Once ingested, ink can be hazardous to one's health. Certain inks, such as those used in digital printers, and even those found in a common pen can be harmful. Though ink does not easily cause death, repeated skin contact or ingestion can cause effects such as severe headaches, skin irritation, or nervous system damage. These effects can be caused by solvents, or by pigment ingredients such as p-Anisidine, which helps create some inks' color and shine.", "title": "Health and environmental aspects" }, { "paragraph_id": 19, "text": "Three main environmental issues with ink are:", "title": "Health and environmental aspects" }, { "paragraph_id": 20, "text": "Some regulatory bodies have set standards for the amount of heavy metals in ink. 
There is a trend toward vegetable oils rather than petroleum oils in recent years in response to a demand for better environmental sustainability performance.", "title": "Health and environmental aspects" }, { "paragraph_id": 21, "text": "Ink uses up non-renewable oils and metals, which has a negative impact on the environment.", "title": "Health and environmental aspects" }, { "paragraph_id": 22, "text": "Carbon inks were commonly made from lampblack or soot and a binding agent such as gum arabic or animal glue. The binding agent keeps carbon particles in suspension and adhered to paper. Carbon particles do not fade over time even when bleached or when in sunlight. One benefit is that carbon ink does not harm paper. Over time, the ink is chemically stable and therefore does not threaten the paper's strength. Despite these benefits, carbon ink is not ideal for permanence and ease of preservation. Carbon ink tends to smudge in humid environments and can be washed off surfaces. The best method of preserving a document written in carbon ink is to store it in a dry environment (Barrow 1972).", "title": "Health and environmental aspects" }, { "paragraph_id": 23, "text": "Recently, carbon inks made from carbon nanotubes have been successfully created. They are similar in composition to traditional inks in that they use a polymer to suspend the carbon nanotubes. These inks can be used in inkjet printers and produce electrically conductive patterns.", "title": "Health and environmental aspects" }, { "paragraph_id": 24, "text": "Iron gall inks became prominent in the early 12th century; they were used for centuries and were widely thought to be the best type of ink. However, iron gall ink is corrosive and damages paper over time (Waters 1940). Items containing this ink can become brittle and the writing fades to brown. The original scores of Johann Sebastian Bach are threatened by the destructive properties of iron gall ink. 
The majority of his works are held by the German State Library, and about 25% of those are in advanced stages of decay (American Libraries 2000). The rate at which the writing fades is based on several factors, such as proportions of ink ingredients, amount deposited on the paper, and paper composition (Barrow 1972:16). Corrosion is caused by acid catalyzed hydrolysis and iron(II)-catalysed oxidation of cellulose (Rouchon-Quillet 2004:389).", "title": "Health and environmental aspects" }, { "paragraph_id": 25, "text": "Treatment is a controversial subject. No treatment undoes damage already caused by acidic ink. Deterioration can only be stopped or slowed. Some think it best not to treat the item at all for fear of the consequences. Others believe that non-aqueous procedures are the best solution. Yet others think an aqueous procedure may preserve items written with iron gall ink. Aqueous treatments include distilled water at different temperatures, calcium hydroxide, calcium bicarbonate, magnesium carbonate, magnesium bicarbonate, and calcium hyphenate. There are many possible side effects from these treatments. There can be mechanical damage, which further weakens the paper. Paper color or ink color may change, and ink may bleed. Other consequences of aqueous treatment are a change of ink texture or formation of plaque on the surface of the ink (Reibland & de Groot 1999).", "title": "Health and environmental aspects" }, { "paragraph_id": 26, "text": "Iron gall inks require storage in a stable environment, because fluctuating relative humidity increases the rate that formic acid, acetic acid, and furan derivatives form in the material the ink was used on. Sulfuric acid acts as a catalyst to cellulose hydrolysis, and iron (II) sulfate acts as a catalyst to cellulose oxidation. These chemical reactions physically weaken the paper, causing brittleness.", "title": "Health and environmental aspects" }, { "paragraph_id": 27, "text": "Indelible means \"un-removable\". 
Some types of indelible ink have a very short shelf life because of the quickly evaporating solvents used. India, Mexico, Indonesia, Malaysia and other developing countries have used indelible ink in the form of electoral stain to prevent electoral fraud. Election ink based on silver nitrate was first applied in the 1962 Indian general election, after being developed at the National Physical Laboratory of India.", "title": "Indelible ink" }, { "paragraph_id": 28, "text": "The election commission in India has used indelible ink for many elections. Indonesia used it in its last election in Aceh. In Mali, the ink is applied to the fingernail. Indelible ink itself is not infallible as it can be used to commit electoral fraud by marking opponent party members before they have chances to cast their votes. There are also reports of \"indelible\" ink washing off voters' fingers in Afghanistan.", "title": "Indelible ink" } ]
Ink is a gel, sol, or solution that contains at least one colorant, such as a dye or pigment, and is used to color a surface to produce an image, text, or design. Ink is used for drawing or writing with a pen, brush, reed pen, or quill. Thicker inks, in paste form, are used extensively in letterpress and lithographic printing. Ink can be a complex medium, composed of solvents, pigments, dyes, resins, lubricants, solubilizers, surfactants, particulate matter, fluorescents, and other materials. The components of inks serve many purposes; the ink's carrier, colorants, and other additives affect the flow and thickness of the ink and its dry appearance.
2001-11-28T22:56:46Z
2023-11-20T16:19:06Z
[ "Template:Main", "Template:See also", "Template:More medical citations needed", "Template:Who", "Template:Reflist", "Template:Cite news", "Template:Authority control", "Template:Main article", "Template:Cite journal", "Template:Citation", "Template:Cite web", "Template:Writing", "Template:Short description", "Template:Other uses", "Template:Specify", "Template:Div col", "Template:Div col end", "Template:Cite book", "Template:ISBN", "Template:Sisterlinks" ]
https://en.wikipedia.org/wiki/Ink
15,294
Islamabad Capital Territory
The Islamabad Capital Territory (ICT; Urdu: وفاقی دارالحکومت, romanized: Vafāqī Dār-alhakūmat) is the only federal territory of Pakistan containing Islamabad, the capital of Pakistan. Located on the northern edge of the Pothohar Plateau and at the foot of the Margalla Hills, the ICT shares borders with the province of Khyber Pakhtunkhwa in the west and with the province of Punjab in the remaining directions. It covers an area of 1,165 square kilometres (450 sq mi) and, according to the 2023 national census, has a population of over 1 million in the city proper and over 2 million in the whole territory. The territory is represented in the National Assembly by NA-52, NA-53, and NA-54 constituencies and by four seats in the Senate. In 1960, land was transferred from Rawalpindi District of Punjab province to replace Karachi Federal Capital Territory and establish Pakistan's new capital. According to the 1960s master plan, the Capital Territory included Rawalpindi, and was to be composed of the following parts: However, Rawalpindi was eventually excluded from the Islamabad master plan in the 1980s. Islamabad is subdivided into five zones: Islamabad Capital Territory comprises urban and rural areas. The rural area consists of 23 union councils, comprising 133 villages, while the urban area has 27 union councils. Islamabad has a humid subtropical climate (Köppen: Cwa), with five seasons: winter (November–February), spring (March and April), summer (May and June), rainy monsoon (July and August), and autumn (September and October). The temperatures range from 13 °C (55 °F) in January to 38 °C (100 °F) in June. The hottest month is June, when average highs routinely exceed 38 °C (100.4 °F), while the coolest month is January. The highest recorded temperature was 46.6 °C (115.9 °F) on 23 June 2005 while the lowest temperature was −6 °C (21.2 °F) on 17 January 1967. Winters generally feature dense fog in the mornings and sunny afternoons. 
In the city, temperatures stay mild, with snowfall at higher elevations on nearby hill stations, notably Murree and Nathia Gali. The wettest month is July, with heavy rainfall and evening thunderstorms and the possibility of cloudbursts and flooding. The highest monthly rainfall, 743.3 millimetres (29.26 in), was recorded in July 1995. On 23 July 2001, Islamabad received a record-breaking 620 millimetres (24 in) of rainfall in just 10 hours, the heaviest rainfall in Islamabad in the past 100 years and also the highest recorded in 24 hours. The city has also experienced snowfall on a number of occasions. Islamabad's micro-climate is regulated by three artificial reservoirs: Rawal, Simli, and Khanpur Dams. The latter is located on the Haro River near the town of Khanpur, about 40 kilometres (25 mi) from Islamabad. Simli Dam is 30 kilometres (19 mi) north of Islamabad. Around 220 acres (89 ha) of the city consists of the Margalla Hills National Park, while the Loi Bher Forest is situated along the Islamabad Highway, covering an area of 1,087 acres (440 ha). The main administrative authority of the city is the Islamabad Capital Territory Administration, with some help from the Metropolitan Corporation Islamabad and the Capital Development Authority (CDA), which oversees the planning, development, construction, and administration of the city. Islamabad Capital Territory is divided into eight zones: Administrative Zone, Commercial District, Educational Sector, Industrial Sector, Diplomatic Enclave, Residential Areas, Rural Areas, and Green Area. Islamabad city is divided into five major zones: Zone I, Zone II, Zone III, Zone IV, and Zone V. Of these, Zone IV is the largest in area. All sectors of Ghouri Town (1, 2, 3, VIP, 5, 4-A, 4-B, 4-C, 5-A, 5-B, and sector 7) are located in this zone. Zone I consists mainly of the developed residential sectors, while Zone II consists of the under-developed residential sectors. 
Each residential sector is identified by a letter of the alphabet and a number, and covers an area of approximately 4 square kilometres. The sectors are lettered from A to I, and each sector is divided into four numbered sub-sectors. Series A, B, and C are still underdeveloped. The D series has seven sectors (D-11 to D-17), of which only sector D-12 is completely developed. This series is located at the foot of the Margalla Hills. The E sectors are numbered from E-7 to E-17. Many foreigners and diplomatic personnel are housed in these sectors. In the revised master plan of the city, the CDA has decided to develop a park on the pattern of Fatima Jinnah Park in sector E-14. Sectors E-8 and E-9 contain the campuses of Bahria University, Air University, and the National Defence University. The F and G series contain the most developed sectors. The F series contains sectors F-5 to F-17; some sectors are still under-developed. F-5 is an important sector for the software industry in Islamabad, as the two software technology parks are located here. The entire F-9 sector is covered by Fatima Jinnah Park. The Centaurus complex will be one of the major landmarks of the F-8 sector. The G sectors are numbered G-5 through G-17. Some important places include the Jinnah Convention Center and Serena Hotel in G-5, the Red Mosque in G-6, and the Pakistan Institute of Medical Sciences, the largest medical complex in the capital, in G-8. The H sectors are numbered H-8 through H-17 and are mostly dedicated to educational and health institutions. The National University of Sciences and Technology covers a major portion of sector H-12. The I sectors are numbered from I-8 to I-18. With the exception of I-8, which is a well-developed residential area, these sectors are primarily part of the industrial zone. Currently, two sub-sectors of I-9 and one sub-sector of I-10 are used as industrial areas. 
The CDA is planning to set up an Islamabad Railway Station in sector I-18 and an Industrial City in sector I-17. Zone III consists primarily of the Margalla Hills and Margalla Hills National Park. Rawal Lake is in this zone. Zones IV and V consist of Islamabad Park and rural areas of the city. The Soan River flows into the city through Zone V. While urban Islamabad is home to people from all over Pakistan as well as expatriates, in the rural areas a number of Pothohari-speaking tribal communities can still be recognised. When the master plan for Islamabad was drawn up in 1960, Islamabad and Rawalpindi, along with the adjoining areas, were to be integrated to form a large metropolitan area called the Islamabad/Rawalpindi Metropolitan Area. The area would consist of the developing Islamabad, the old colonial cantonment city of Rawalpindi, and Margalla Hills National Park, including surrounding rural areas. However, Islamabad city is part of the Islamabad Capital Territory, while Rawalpindi is part of Rawalpindi District, which belongs to the province of Punjab. Initially, it was proposed that the three areas would be connected by four major highways: Murree Highway, Islamabad Highway, Soan Highway, and Capital Highway. However, to date, only two have been constructed: Srinagar Highway (formerly known as Murree Highway and later as Kashmir Highway) and Islamabad Highway. Plans to construct Margalla Avenue are also underway. Islamabad is the hub of all governmental activities, while Rawalpindi is the centre of industrial, commercial, and military activities. The two cities are considered sister cities and are highly interdependent. Islamabad is a net contributor to the Pakistani economy: while having only 0.8% of the country's population, it contributes 1% of the country's GDP. The Islamabad Stock Exchange, founded in 1989, is Pakistan's third-largest stock exchange after the Karachi Stock Exchange and the Lahore Stock Exchange. 
The exchange has 118 members, with 104 corporate bodies and 18 individual members. The average daily turnover of the stock exchange is over 1 million shares. As of 2012, the Islamabad Large Tax Unit (LTU) was responsible for Rs 371 billion in tax revenue, which amounts to 20% of all the revenue collected by the Federal Board of Revenue. Islamabad has seen an expansion in information and communications technology with the addition of two software technology parks, which house numerous national and foreign technology and information technology companies. The tech parks are located in the Evacuee Trust Complex and Awami Markaz; Awami Markaz houses 36 IT companies, while the Evacuee Trust Complex houses 29. Call centres for foreign companies have been targeted as another significant area of growth, with the government making efforts to reduce taxes by as much as 10% to encourage foreign investment in the information technology sector. Most of Pakistan's state-owned companies, such as PIA, PTV, PTCL, OGDCL, and Zarai Taraqiati Bank Ltd., are based in Islamabad. The headquarters of all major telecommunication operators, such as PTCL, Mobilink, Telenor, Ufone, and China Mobile, are located in Islamabad. Islamabad is an expensive city, and the prices of most fruit, vegetable, and poultry items there increased during 2015–2020. Islamabad is connected to major destinations around the world through Islamabad International Airport. The airport is the largest in Pakistan, handling 9 million passengers per annum. It was built at a cost of $400 million and opened on 3 May 2018, replacing the former Benazir Bhutto International Airport. It is the first greenfield airport in Pakistan, with an area of 3,600 acres (15 km²). The Rawalpindi-Islamabad Metrobus is a 24 km (14.9 mi) bus rapid transit system that serves the twin cities of Rawalpindi and Islamabad. It uses dedicated bus lanes for all of its route, which covers 24 bus stations. 
Islamabad is well connected with other parts of the country through car rental services such as Alvi Transport Network and Pakistan Car Rentals. All major cities and towns are accessible through regular train and bus services, running mostly from the neighbouring city of Rawalpindi. Lahore and Peshawar are linked to Islamabad through a network of motorways, which has significantly reduced travelling times between these cities. The M-2 Motorway is 367 km (228 mi) long and connects Islamabad and Lahore. The M-1 Motorway connects Islamabad with Peshawar and is 155 km (96 mi) long. Islamabad is linked to Rawalpindi through the Faizabad Interchange, which has a daily traffic volume of about 48,000 vehicles. Islamabad has the highest literacy rate in Pakistan, at 95%. It also has some of Pakistan's major universities, including Quaid-i-Azam University, the International Islamic University, the National University of Sciences and Technology, and the Pakistan Institute of Engineering and Applied Sciences. The Private School Network (PSN) Islamabad works for private educational institutions. The president of PSN is Dr. Muhammad Afzal Babur from Bhara Kahu. PSN is divided into eight zones in Islamabad. In the Tarlai zone, Chaudhary Faisal Ali from Faisal Academy Tarlai Kalan is the zonal general secretary of PSN. Quaid-i-Azam University has several faculties. The institute is located in a semi-hilly area, east of the Secretariat buildings and near the base of the Margalla Hills. This postgraduate institute is spread over 1,705 acres (6.90 km²). The nucleus of the campus has been designed as an axial spine with a library as its centre. Other universities include the following: Islamabad United became the first team to win the Pakistan Super League, in 2016. The federal team now participates in the Pakistan Cup. Islamabad travel guide from Wikivoyage
2001-11-29T14:11:47Z
2023-12-27T06:59:16Z
https://en.wikipedia.org/wiki/Islamabad_Capital_Territory
15,295
Intelligent design
Intelligent design (ID) is a pseudoscientific argument for the existence of God, presented by its proponents as "an evidence-based scientific theory about life's origins". Proponents claim that "certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection." ID is a form of creationism that lacks empirical support and offers no testable or tenable hypotheses, and is therefore not science. The leading proponents of ID are associated with the Discovery Institute, a Christian, politically conservative think tank based in the United States. Although the phrase intelligent design had featured previously in theological discussions of the argument from design, its first publication in its present use as an alternative term for creationism was in Of Pandas and People, a 1989 creationist textbook intended for high school biology classes. The term was substituted into drafts of the book, directly replacing references to creation science and creationism, after the 1987 Supreme Court's Edwards v. Aguillard decision barred the teaching of creation science in public schools on constitutional grounds. From the mid-1990s, the intelligent design movement (IDM), supported by the Discovery Institute, advocated inclusion of intelligent design in public school biology curricula. This led to the 2005 Kitzmiller v. Dover Area School District trial, which found that intelligent design was not science, that it "cannot uncouple itself from its creationist, and thus religious, antecedents", and that the public school district's promotion of it therefore violated the Establishment Clause of the First Amendment to the United States Constitution. ID presents two main arguments against evolutionary explanations: irreducible complexity and specified complexity, asserting that certain biological and informational features of living things are too complex to be the result of natural selection. 
Detailed scientific examination has rebutted several examples for which evolutionary explanations are claimed to be impossible. ID seeks to challenge the methodological naturalism inherent in modern science, though proponents concede that they have yet to produce a scientific theory. As a positive argument against evolution, ID proposes an analogy between natural systems and human artifacts, a version of the theological argument from design for the existence of God. ID proponents then conclude by analogy that the complex features, as defined by ID, are evidence of design. Critics of ID find a false dichotomy in the premise that evidence against evolution constitutes evidence for design. In 1910, evolution was not a topic of major religious controversy in America, but in the 1920s, the fundamentalist–modernist controversy in theology resulted in fundamentalist Christian opposition to teaching evolution and resulted in the origins of modern creationism. As a result, teaching of evolution was effectively suspended in U.S. public schools until the 1960s, and when evolution was then reintroduced into the curriculum, there was a series of court cases in which attempts were made to get creationism taught alongside evolution in science classes. Young Earth creationists (YECs) promoted "creation science" as "an alternative scientific explanation of the world in which we live". This frequently invoked the argument from design to explain complexity in nature as supposedly demonstrating the existence of God. The argument from design, also known as the teleological argument or "argument from intelligent design", has been presented by theologists for centuries. A sufficiently succinct summary of the argument from design shows its unscientific, circular, and thereby illogical reasoning, for example as follows: "Wherever complex design exists, there must have been a designer; nature is complex and therefore nature must have had an intelligent designer." 
Thomas Aquinas presented it in his fifth proof of God's existence as a syllogism. In 1802, William Paley's Natural Theology presented examples of intricate purpose in organisms. His version of the watchmaker analogy argued that a watch has evidently been designed by a craftsman and that it is supposedly just as evident that the complexity and adaptation seen in nature must have been designed. He went on to argue that the perfection and diversity of these designs supposedly shows the designer to be omnipotent and that this can supposedly only be the Christian god. Like "creation science", intelligent design centers on Paley's religious argument from design, but while Paley's natural theology was open to deistic design through God-given laws, intelligent design seeks scientific confirmation of repeated supposedly miraculous interventions in the history of life. "Creation science" prefigured the intelligent design arguments of irreducible complexity, even featuring the bacterial flagellum. In the United States, attempts to introduce "creation science" into schools led to court rulings that it is religious in nature and thus cannot be taught in public school science classrooms. Intelligent design is also presented as science and shares other arguments with "creation science" but avoids literal Biblical references to such topics as the biblical flood story or using Bible verses to estimate the age of the Earth. Barbara Forrest writes that the intelligent design movement began in 1984 with the book The Mystery of Life's Origin: Reassessing Current Theories, co-written by the creationist and chemist Charles B. Thaxton and two other authors and published by Jon A. Buell's Foundation for Thought and Ethics. In March 1986, Stephen C. Meyer published a review of this book, discussing how information theory could suggest that messages transmitted by DNA in the cell show "specified complexity" and must have been created by an intelligent agent. 
He also argued that science is based upon "foundational assumptions" of naturalism that were as much a matter of faith as those of "creation theory". In November of that year, Thaxton described his reasoning as a more sophisticated form of Paley's argument from design. At a conference that Thaxton held in 1988 ("Sources of Information Content in DNA"), he said that his intelligent cause view was compatible with both metaphysical naturalism and supernaturalism. Intelligent design avoids identifying or naming the intelligent designer—it merely states that one (or more) must exist—but leaders of the movement have said the designer is the Christian God. Whether this lack of specificity about the designer's identity in public discussions is a genuine feature of the concept – or just a posture taken to avoid alienating those who would separate religion from the teaching of science – has been a matter of great debate between supporters and critics of intelligent design. The Kitzmiller v. Dover Area School District court ruling held the latter to be the case. Since the Middle Ages, discussion of the religious "argument from design" or "teleological argument" in theology, with its concept of "intelligent design", has persistently referred to the theistic Creator God. Although ID proponents chose this provocative label for their proposed alternative to evolutionary explanations, they have de-emphasized their religious antecedents and denied that ID is natural theology, while still presenting ID as supporting the argument for the existence of God. While intelligent design proponents have pointed out past examples of the phrase intelligent design that they said were not creationist and faith-based, they have failed to show that these usages had any influence on those who introduced the label in the intelligent design movement. 
Variations on the phrase appeared in Young Earth creationist publications: a 1967 book co-written by Percival Davis referred to "design according to which basic organisms were created". In 1970, A. E. Wilder-Smith published The Creation of Life: A Cybernetic Approach to Evolution. The book defended Paley's design argument with computer calculations of the improbability of genetic sequences, which he said could not be explained by evolution but required "the abhorred necessity of divine intelligent activity behind nature", and that "the same problem would be expected to beset the relationship between the designer behind nature and the intelligently designed part of nature known as man." In a 1984 article as well as in his affidavit to Edwards v. Aguillard, Dean H. Kenyon defended creation science by stating that "biomolecular systems require intelligent design and engineering know-how", citing Wilder-Smith. Creationist Richard B. Bliss used the phrase "creative design" in Origins: Two Models: Evolution, Creation (1976), and in Origins: Creation or Evolution (1988) wrote that "while evolutionists are trying to find non-intelligent ways for life to occur, the creationist insists that an intelligent design must have been there in the first place." The first systematic use of the term, defined in a glossary and claimed to be other than creationism, was in Of Pandas and People, co-authored by Davis and Kenyon. The most common modern use of the words "intelligent design" as a term intended to describe a field of inquiry began after the United States Supreme Court ruled in June 1987 in the case of Edwards v. Aguillard that it is unconstitutional for a state to require the teaching of creationism in public school science curricula. A Discovery Institute report says that Charles B. Thaxton, editor of Pandas, had picked the phrase up from a NASA scientist, and thought, "That's just what I need, it's a good engineering term." 
In two successive 1987 drafts of the book, over one hundred uses of the root word "creation", such as "creationism" and "Creation Science", were changed, almost without exception, to "intelligent design", while "creationists" was changed to "design proponents" or, in one instance, "cdesign proponentsists" [sic]. In June 1988, Thaxton held a conference titled "Sources of Information Content in DNA" in Tacoma, Washington. Stephen C. Meyer was at the conference, and later recalled that "The term intelligent design came up..." In December 1988 Thaxton decided to use the label "intelligent design" for his new creationist movement. Of Pandas and People was published in 1989, and in addition to including all the current arguments for ID, was the first book to make systematic use of the terms "intelligent design" and "design proponents" as well as the phrase "design theory", defining the term intelligent design in a glossary and representing it as not being creationism. It thus represents the start of the modern intelligent design movement. "Intelligent design" was the most prominent of around fifteen new terms it introduced as a new lexicon of creationist terminology to oppose evolution without using religious language. It was the first place where the phrase "intelligent design" appeared in its primary present use, as stated both by its publisher Jon A. Buell, and by William A. Dembski in his expert witness report for Kitzmiller v. Dover Area School District. The National Center for Science Education (NCSE) has criticized the book for presenting all of the basic arguments of intelligent design proponents and being actively promoted for use in public schools before any research had been done to support these arguments. Although presented as a scientific textbook, philosopher of science Michael Ruse considers the contents "worthless and dishonest". 
An American Civil Liberties Union lawyer described it as a political tool aimed at students who did not "know science or understand the controversy over evolution and creationism". One of the authors of the science framework used by California schools, Kevin Padian, condemned it for its "sub-text", "intolerance for honest science" and "incompetence". The term "irreducible complexity" was introduced by biochemist Michael Behe in his 1996 book Darwin's Black Box, though he had already described the concept in his contributions to the 1993 revised edition of Of Pandas and People. Behe defines it as "a single system which is composed of several well-matched interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning". Behe uses the analogy of a mousetrap to illustrate this concept. A mousetrap consists of several interacting pieces—the base, the catch, the spring and the hammer—all of which must be in place for the mousetrap to work. Removal of any one piece destroys the function of the mousetrap. Intelligent design advocates assert that natural selection could not create irreducibly complex systems, because the selectable function is present only when all parts are assembled. Behe argued that irreducibly complex biological mechanisms include the bacterial flagellum of E. coli, the blood clotting cascade, cilia, and the adaptive immune system. Critics point out that the irreducible complexity argument assumes that the necessary parts of a system have always been necessary and therefore could not have been added sequentially. They argue that something that is at first merely advantageous can later become necessary as other components change. Furthermore, they argue, evolution often proceeds by altering preexisting parts or by removing them from a system, rather than by adding them. 
This is sometimes called the "scaffolding objection" by an analogy with scaffolding, which can support an "irreducibly complex" building until it is complete and able to stand on its own. Behe has acknowledged using "sloppy prose", and that his "argument against Darwinism does not add up to a logical proof." Irreducible complexity has remained a popular argument among advocates of intelligent design; in the Dover trial, the court held that "Professor Behe's claim for irreducible complexity has been refuted in peer-reviewed research papers and has been rejected by the scientific community at large." In 1986, Charles B. Thaxton, a physical chemist and creationist, used the term "specified complexity" from information theory when claiming that messages transmitted by DNA in the cell were specified by intelligence, and must have originated with an intelligent agent. The intelligent design concept of "specified complexity" was developed in the 1990s by mathematician, philosopher, and theologian William A. Dembski. Dembski states that when something exhibits specified complexity (i.e., is both complex and "specified", simultaneously), one can infer that it was produced by an intelligent cause (i.e., that it was designed) rather than being the result of natural processes. He provides the following examples: "A single letter of the alphabet is specified without being complex. A long sentence of random letters is complex without being specified. A Shakespearean sonnet is both complex and specified." He states that details of living things can be similarly characterized, especially the "patterns" of molecular sequences in functional biological molecules such as DNA. Dembski defines complex specified information (CSI) as anything with a less than 1 in 10^150 chance of occurring by (natural) chance. 
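The scale of this threshold, often called the universal probability bound, can be illustrated with a short calculation (an editorial illustration, not drawn from Dembski's own writings): a specific string drawn uniformly from a 26-letter alphabet first falls below a 1-in-10^150 chance of occurring at random once it is a little over a hundred (107) letters long.

```python
# Illustration of the 1-in-10^150 "universal probability bound":
# how long must a specific, uniformly random string over a 26-letter
# alphabet be before its probability of occurring by chance drops
# below 10^-150? (An editorial illustration, not from Dembski.)

import math

BOUND = 10**150  # the bound expressed as 1-in-N odds

def letters_needed(alphabet_size: int = 26, bound: int = BOUND) -> int:
    """Smallest n with alphabet_size**n > bound, i.e. the shortest
    specific string whose chance probability falls below 1/bound."""
    n = math.ceil(math.log(bound) / math.log(alphabet_size))
    # Guard against floating-point edge cases with exact integer checks.
    while alphabet_size**n <= bound:
        n += 1
    while n > 1 and alphabet_size**(n - 1) > bound:
        n -= 1
    return n

n = letters_needed()
print(n)                             # length at which the bound is crossed
print(26**(n - 1) < BOUND < 26**n)   # exact integer confirmation
```

Since 26^106 is just under 10^150 while 26^107 exceeds it, the crossover is 107 letters; the exact-integer checks avoid relying on floating-point logarithms at the boundary.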
Critics say that this renders the argument a tautology: complex specified information cannot occur naturally because Dembski has defined it thus, so the real question becomes whether or not CSI actually exists in nature. The conceptual soundness of Dembski's specified complexity/CSI argument has been discredited in the scientific and mathematical communities. Specified complexity has yet to be shown to have wide applications in other fields, as Dembski asserts. John Wilkins and Wesley R. Elsberry characterize Dembski's "explanatory filter" as eliminative because it eliminates explanations sequentially: first regularity, then chance, finally defaulting to design. They argue that this procedure is flawed as a model for scientific inference because the asymmetric way it treats the different possible explanations renders it prone to making false conclusions. Richard Dawkins, another critic of intelligent design, argues in The God Delusion (2006) that allowing for an intelligent designer to account for unlikely complexity only postpones the problem, as such a designer would need to be at least as complex. Other scientists have argued that evolution through selection is better able to explain the observed complexity, as is evident from the use of selective evolution to design certain electronic, aeronautic and automotive systems that are considered problems too complex for human "intelligent designers". Intelligent design proponents have also occasionally appealed to broader teleological arguments outside of biology, most notably an argument based on the fine-tuning of universal constants that make matter and life possible and that are argued not to be solely attributable to chance. These include the values of fundamental physical constants, the relative strength of nuclear forces, electromagnetism, and gravity between fundamental particles, as well as the ratios of masses of such particles. 
Intelligent design proponent and Center for Science and Culture fellow Guillermo Gonzalez argues that if any of these values were even slightly different, the universe would be dramatically different, making it impossible for many chemical elements and features of the Universe, such as galaxies, to form. Thus, proponents argue, an intelligent designer of life was needed to ensure that the requisite features were present to achieve that particular outcome. Scientists have generally responded that these arguments are poorly supported by existing evidence. Victor J. Stenger and other critics say both intelligent design and the weak form of the anthropic principle are essentially tautological; in his view, these arguments amount to the claim that life is able to exist because the Universe is able to support life. The claim of the improbability of a life-supporting universe has also been criticized as an argument from lack of imagination for assuming no other forms of life are possible. Life as we know it might not exist if things were different, but a different sort of life might exist in its place. A number of critics also suggest that many of the stated variables appear to be interconnected and that calculations made by mathematicians and physicists suggest that the emergence of a universe similar to ours is quite probable. The contemporary intelligent design movement formulates its arguments in secular terms and intentionally avoids identifying the intelligent agent (or agents) they posit. Although they do not state that God is the designer, the designer is often implicitly hypothesized to have intervened in a way that only a god could. Dembski, in The Design Inference (1998), speculates that an alien culture could fulfill these requirements. Of Pandas and People proposes that SETI illustrates an appeal to intelligent design in science. In 2000, philosopher of science Robert T. 
Pennock suggested the Raëlian UFO religion as a real-life example of an extraterrestrial intelligent designer view that "make[s] many of the same bad arguments against evolutionary theory as creationists". The authoritative description of intelligent design, however, explicitly states that the Universe displays features of having been designed. Acknowledging the paradox, Dembski concludes that "no intelligent agent who is strictly physical could have presided over the origin of the universe or the origin of life." The leading proponents have made statements to their supporters that they believe the designer to be the Christian God, to the exclusion of all other religions. Beyond the debate over whether intelligent design is scientific, a number of critics argue that existing evidence makes the design hypothesis appear unlikely, irrespective of its status in the world of science. For example, Jerry Coyne asks why a designer would "give us a pathway for making vitamin C, but then destroy it by disabling one of its enzymes" (see pseudogene) and why a designer would not "stock oceanic islands with reptiles, mammals, amphibians, and freshwater fish, despite the suitability of such islands for these species". Coyne also points to the fact that "the flora and fauna on those islands resemble that of the nearest mainland, even when the environments are very different" as evidence that species were not placed there by a designer. Previously, in Darwin's Black Box, Behe had argued that we are simply incapable of understanding the designer's motives, so such questions cannot be answered definitively. Odd designs could, for example, "...have been placed there by the designer for a reason—for artistic reasons, for variety, to show off, for some as-yet-undetected practical purpose, or for some unguessable reason—or they might not." 
Coyne responds that in light of the evidence, "either life resulted not from intelligent design, but from evolution; or the intelligent designer is a cosmic prankster who designed everything to make it look as though it had evolved." Intelligent design proponents such as Paul Nelson avoid the problem of poor design in nature by insisting that we have simply failed to understand the perfection of the design. Behe cites Paley as his inspiration, but he differs from Paley's expectation of a perfect Creation and proposes that designers do not necessarily produce the best design they can. Behe suggests that, like a parent not wanting to spoil a child with extravagant toys, the designer can have multiple motives for not giving priority to excellence in engineering. He says that "Another problem with the argument from imperfection is that it critically depends on a psychoanalysis of the unidentified designer. Yet the reasons that a designer would or would not do anything are virtually impossible to know unless the designer tells you specifically what those reasons are." This reliance on inexplicable motives of the designer makes intelligent design scientifically untestable. Retired UC Berkeley law professor, author and intelligent design advocate Phillip E. Johnson puts forward a core definition that the designer creates for a purpose, giving the example that in his view AIDS was created to punish immorality and is not caused by HIV, but such motives cannot be tested by scientific methods. Asserting the need for a designer of complexity also raises the question "What designed the designer?" Intelligent design proponents say that the question is irrelevant to or outside the scope of intelligent design. Richard Wein counters that "...scientific explanations often create new unanswered questions. But, in assessing the value of an explanation, these questions are not irrelevant. 
They must be balanced against the improvements in our understanding which the explanation provides. Invoking an unexplained being to explain the origin of other beings (ourselves) is little more than question-begging. The new question raised by the explanation is as problematic as the question which the explanation purports to answer." Richard Dawkins sees the assertion that the designer does not need to be explained as a thought-terminating cliché. In the absence of observable, measurable evidence, the question "What designed the designer?" leads to an infinite regression from which intelligent design proponents can only escape by resorting to religious creationism or logical contradiction. The intelligent design movement is a direct outgrowth of the creationism of the 1980s. The scientific and academic communities, along with a U.S. federal court, view intelligent design as either a form of creationism or as a direct descendant that is closely intertwined with traditional creationism; and several authors explicitly refer to it as "intelligent design creationism". The movement is headquartered in the Center for Science and Culture, established in 1996 as the creationist wing of the Discovery Institute to promote a religious agenda calling for broad social, academic and political changes. The Discovery Institute's intelligent design campaigns have been staged primarily in the United States, although efforts have been made in other countries to promote intelligent design. Leaders of the movement say intelligent design exposes the limitations of scientific orthodoxy and of the secular philosophy of naturalism. Intelligent design proponents allege that science should not be limited to naturalism and should not demand the adoption of a naturalistic philosophy that dismisses out-of-hand any explanation that includes a supernatural cause. 
The overall goal of the movement is to "reverse the stifling dominance of the materialist worldview" represented by the theory of evolution in favor of "a science consonant with Christian and theistic convictions". Phillip E. Johnson stated that the goal of intelligent design is to cast creationism as a scientific concept. All leading intelligent design proponents are fellows or staff of the Discovery Institute and its Center for Science and Culture. Nearly all intelligent design concepts and the associated movement are the products of the Discovery Institute, which guides the movement and follows its wedge strategy while conducting its "teach the controversy" campaign and their other related programs. Leading intelligent design proponents have made conflicting statements regarding intelligent design. In statements directed at the general public, they say intelligent design is not religious; when addressing conservative Christian supporters, they state that intelligent design has its foundation in the Bible. Recognizing the need for support, the Institute affirms its Christian, evangelistic orientation: Alongside a focus on influential opinion-makers, we also seek to build up a popular base of support among our natural constituency, namely, Christians. We will do this primarily through apologetics seminars. We intend these to encourage and equip believers with new scientific evidences that support the faith, as well as to "popularize" our ideas in the broader culture. Barbara Forrest, an expert who has written extensively on the movement, describes this as being due to the Discovery Institute's obfuscating its agenda as a matter of policy. She has written that the movement's "activities betray an aggressive, systematic agenda for promoting not only intelligent design creationism, but the religious worldview that undergirds it." 
Although arguments for intelligent design by the intelligent design movement are formulated in secular terms and intentionally avoid positing the identity of the designer, the majority of principal intelligent design advocates are publicly religious Christians who have stated that, in their view, the designer proposed in intelligent design is the Christian conception of God. Stuart Burgess, Phillip E. Johnson, William A. Dembski, and Stephen C. Meyer are evangelical Protestants; Michael Behe is a Roman Catholic; Paul Nelson supports young Earth creationism; and Jonathan Wells is a member of the Unification Church. Non-Christian proponents include David Klinghoffer, who is Jewish, Michael Denton and David Berlinski, who are agnostic, and Muzaffar Iqbal, a Pakistani-Canadian Muslim. Phillip E. Johnson has stated that cultivating ambiguity by employing secular language in arguments that are carefully crafted to avoid overtones of theistic creationism is a necessary first step for ultimately reintroducing the Christian concept of God as the designer. Johnson explicitly calls for intelligent design proponents to obfuscate their religious motivations so as to avoid having intelligent design identified "as just another way of packaging the Christian evangelical message." Johnson emphasizes that "...the first thing that has to be done is to get the Bible out of the discussion. ...This is not to say that the biblical issues are unimportant; the point is rather that the time to address them will be after we have separated materialist prejudice from scientific fact." The strategy of deliberately disguising the religious intent of intelligent design has been described by William A. Dembski in The Design Inference. 
In this work, Dembski lists a god or an "alien life force" as two possible options for the identity of the designer; however, in his book Intelligent Design: The Bridge Between Science and Theology (1999), Dembski states: Christ is indispensable to any scientific theory, even if its practitioners don't have a clue about him. The pragmatics of a scientific theory can, to be sure, be pursued without recourse to Christ. But the conceptual soundness of the theory can in the end only be located in Christ. Dembski also stated, "ID is part of God's general revelation ... Not only does intelligent design rid us of this ideology [materialism], which suffocates the human spirit, but, in my personal experience, I've found that it opens the path for people to come to Christ." Both Johnson and Dembski cite the Bible's Gospel of John as the foundation of intelligent design. Barbara Forrest contends such statements reveal that leading proponents see intelligent design as essentially religious in nature, not merely a scientific concept that has implications with which their personal religious beliefs happen to coincide. She writes that the leading proponents of intelligent design are closely allied with the ultra-conservative Christian Reconstructionism movement. She lists connections of (current and former) Discovery Institute Fellows Phillip E. Johnson, Charles B. Thaxton, Michael Behe, Richard Weikart, Jonathan Wells and Francis J. Beckwith to leading Christian Reconstructionist organizations, and the extent of the funding provided the Institute by Howard Ahmanson, Jr., a leading figure in the Reconstructionist movement. Not all creationist organizations have embraced the intelligent design movement. According to Thomas Dixon, "Religious leaders have come out against ID too. 
An open letter affirming the compatibility of Christian faith and the teaching of evolution, first produced in response to controversies in Wisconsin in 2004, has now been signed by over ten thousand clergy from different Christian denominations across America." Hugh Ross of Reasons to Believe, a proponent of Old Earth creationism, believes that the efforts of intelligent design proponents to divorce the concept from Biblical Christianity make its hypothesis too vague. In 2002, he wrote: "Winning the argument for design without identifying the designer yields, at best, a sketchy origins model. Such a model makes little if any positive impact on the community of scientists and other scholars. ... the time is right for a direct approach, a single leap into the origins fray. Introducing a biblically based, scientifically verifiable creation model represents such a leap." Likewise, two of the most prominent YEC organizations in the world have attempted to distinguish their views from those of the intelligent design movement. Henry M. Morris of the Institute for Creation Research (ICR) wrote, in 1999, that ID, "even if well-meaning and effectively articulated, will not work! It has often been tried in the past and has failed, and it will fail today. The reason it won't work is because it is not the Biblical method." According to Morris: "The evidence of intelligent design ... must be either followed by or accompanied by a sound presentation of true Biblical creationism if it is to be meaningful and lasting." In 2002, Carl Wieland, then of Answers in Genesis (AiG), criticized design advocates who, though well-intentioned, "'left the Bible out of it'" and thereby unwittingly aided and abetted the modern rejection of the Bible. Wieland explained that "AiG's major 'strategy' is to boldly, but humbly, call the church back to its Biblical foundations ... [so] we neither count ourselves a part of this movement nor campaign against it." 
The unequivocal consensus in the scientific community is that intelligent design is not science and has no place in a science curriculum. The U.S. National Academy of Sciences has stated that "creationism, intelligent design, and other claims of supernatural intervention in the origin of life or of species are not science because they are not testable by the methods of science." The U.S. National Science Teachers Association and the American Association for the Advancement of Science have termed it pseudoscience. Others in the scientific community have denounced its tactics, accusing the ID movement of manufacturing false attacks against evolution, of engaging in misinformation and misrepresentation about science, and marginalizing those who teach it. More recently, in September 2012, Bill Nye warned that creationist views threaten science education and innovations in the United States. In 2001, the Discovery Institute published advertisements under the heading "A Scientific Dissent From Darwinism", with the claim that listed scientists had signed this statement expressing skepticism: We are skeptical of claims for the ability of random mutation and natural selection to account for the complexity of life. Careful examination of the evidence for Darwinian theory should be encouraged. The ambiguous statement did not exclude other known evolutionary mechanisms, and most signatories were not scientists in relevant fields, but starting in 2004 the Institute claimed the increasing number of signatures indicated mounting doubts about evolution among scientists. The statement formed a key component of Discovery Institute campaigns to present intelligent design as scientifically valid by claiming that evolution lacks broad scientific support, with Institute members continuing to cite the list through at least 2011. 
As part of a strategy to counter these claims, scientists organized Project Steve, which gained more signatories named Steve (or variants) than the Institute's petition, and a counter-petition, "A Scientific Support for Darwinism", which quickly gained similar numbers of signatories. Several surveys seeking to determine the level of support for intelligent design among certain groups were conducted prior to the December 2005 decision in Kitzmiller v. Dover Area School District. According to a 2005 Harris poll, 10% of adults in the United States viewed human beings as "so complex that they required a powerful force or intelligent being to help create them." Although Zogby polls commissioned by the Discovery Institute show more support, these polls suffer from considerable flaws, such as having a low response rate (248 out of 16,000), being conducted on behalf of an organization with an expressed interest in the outcome of the poll, and containing leading questions. The 2017 Gallup creationism survey found that 38% of adults in the United States hold the view that "God created humans in their present form at one time within the last 10,000 years" when asked for their views on the origin and development of human beings, which was noted as being at the lowest level in 35 years. Previously, a series of Gallup polls in the United States from 1982 through 2014 on "Evolution, Creationism, Intelligent Design" found support for "human beings have developed over millions of years from less advanced forms of life, but God guided the process" of between 31% and 40%, support for "God created human beings in pretty much their present form at one time within the last 10,000 years or so" varied from 40% to 47%, and support for "human beings have developed over millions of years from less advanced forms of life, but God had no part in the process" varied from 9% to 19%. The polls also noted answers to a series of more detailed questions. 
There have been allegations that ID proponents have met discrimination, such as being refused tenure or being harshly criticized on the Internet. In the documentary film Expelled: No Intelligence Allowed, released in 2008, host Ben Stein presents five such cases. The film contends that the mainstream science establishment, in a "scientific conspiracy to keep God out of the nation's laboratories and classrooms", suppresses academics who believe they see evidence of intelligent design in nature or criticize evidence of evolution. Investigation into these allegations turned up alternative explanations for perceived persecution. The film portrays intelligent design as motivated by science, rather than religion, though it does not give a detailed definition of the phrase or attempt to explain it on a scientific level. Other than briefly addressing issues of irreducible complexity, Expelled examines it as a political issue. The scientific theory of evolution is portrayed by the film as contributing to fascism, the Holocaust, communism, atheism, and eugenics. Expelled has been used in private screenings to legislators as part of the Discovery Institute intelligent design campaign for Academic Freedom bills. Review screenings were restricted to churches and Christian groups, and at a special pre-release showing, one of the interviewees, PZ Myers, was refused admission. The American Association for the Advancement of Science describes the film as dishonest and divisive propaganda aimed at introducing religious ideas into public school science classrooms, and the Anti-Defamation League has denounced the film's allegation that evolutionary theory influenced the Holocaust. The film includes interviews with scientists and academics who were misled into taking part by misrepresentation of the topic and title of the film. Skeptic Michael Shermer describes his experience of being repeatedly asked the same question without context as "surreal". 
Advocates of intelligent design seek to keep God and the Bible out of the discussion, and present intelligent design in the language of science as though it were a scientific hypothesis. For a theory to qualify as scientific, it is expected to be consistent, parsimonious, scientifically useful, empirically testable and falsifiable, and correctable, dynamic, progressive, and provisional. For any theory, hypothesis, or conjecture to be considered scientific, it must meet most, and ideally all, of these criteria. The fewer criteria are met, the less scientific it is; if it meets only a few or none at all, then it cannot be treated as scientific in any meaningful sense of the word. Typical objections to defining intelligent design as science are that it lacks consistency, violates the principle of parsimony, is not scientifically useful, is not falsifiable, is not empirically testable, and is not correctable, dynamic, progressive, or provisional. Intelligent design proponents seek to change this fundamental basis of science by eliminating "methodological naturalism" from science and replacing it with what the leader of the intelligent design movement, Phillip E. Johnson, calls "theistic realism". Intelligent design proponents argue that naturalistic explanations fail to explain certain phenomena and that supernatural explanations provide a simple and intuitive explanation for the origins of life and the universe. Many intelligent design followers believe that "scientism" is itself a religion that promotes secularism and materialism in an attempt to erase theism from public life, and they view their work in the promotion of intelligent design as a way to return religion to a central role in education and other public spheres. It has been argued that methodological naturalism is not an assumption of science, but a result of science well done: the God explanation is the least parsimonious, so according to Occam's razor, it cannot be a scientific explanation. 
The failure to follow the procedures of scientific discourse and the failure to submit work to the scientific community that withstands scrutiny have weighed against intelligent design being accepted as valid science. The intelligent design movement has not published a properly peer-reviewed article supporting ID in a scientific journal, and has failed to publish supporting peer-reviewed research or data. The only article published in a peer-reviewed scientific journal that made a case for intelligent design was quickly withdrawn by the publisher for having circumvented the journal's peer-review standards. The Discovery Institute says that a number of intelligent design articles have been published in peer-reviewed journals, but critics, largely members of the scientific community, reject this claim and state intelligent design proponents have set up their own journals with peer review that lack impartiality and rigor, consisting entirely of intelligent design supporters. Further criticism stems from the fact that the phrase intelligent design makes use of an assumption of the quality of an observable intelligence, a concept that has no scientific consensus definition. The characteristics of intelligence are assumed by intelligent design proponents to be observable without specifying what the criteria for the measurement of intelligence should be. Critics say that the design detection methods proposed by intelligent design proponents are radically different from conventional design detection, undermining the key elements that make it possible as legitimate science. 
Intelligent design proponents, they say, are proposing both searching for a designer without knowing anything about that designer's abilities, parameters, or intentions (which scientists do know when searching for the results of human intelligence), and denying the distinction between natural and artificial design that allows scientists to compare complex designed artifacts against the background of the sorts of complexity found in nature. Among a significant proportion of the general public in the United States, the major concern is whether conventional evolutionary biology is compatible with belief in God and in the Bible, and how this issue is taught in schools. The Discovery Institute's "teach the controversy" campaign promotes intelligent design while attempting to discredit evolution in United States public high school science courses. The scientific community and science education organizations have replied that there is no scientific controversy regarding the validity of evolution and that the controversy exists solely in terms of religion and politics. Eugenie C. Scott, along with Glenn Branch and other critics, has argued that many points raised by intelligent design proponents are arguments from ignorance. In the argument from ignorance, a lack of evidence for one view is erroneously argued to constitute proof of the correctness of another view. Scott and Branch say that intelligent design is an argument from ignorance because it relies on a lack of knowledge for its conclusion: lacking a natural explanation for certain specific aspects of evolution, we assume an intelligent cause. They contend that most scientists would reply that the unexplained is not unexplainable, and that "we don't know yet" is a more appropriate response than invoking a cause outside science.
Particularly, Michael Behe's demands for ever more detailed explanations of the historical evolution of molecular systems seem to assume a false dichotomy, where either evolution or design is the proper explanation, and any perceived failure of evolution becomes a victory for design. Scott and Branch also contend that the supposedly novel contributions proposed by intelligent design proponents have not served as the basis for any productive scientific research. In his conclusion to the Kitzmiller trial, Judge John E. Jones III wrote that "ID is at bottom premised upon a false dichotomy, namely, that to the extent evolutionary theory is discredited, ID is confirmed." This same argument had been put forward to support creation science at the McLean v. Arkansas (1982) trial, which found it was "contrived dualism", the false premise of a "two model approach". Behe's argument of irreducible complexity puts forward negative arguments against evolution but does not make any positive scientific case for intelligent design. It fails to allow for scientific explanations continuing to be found, as has been the case with several examples previously put forward as supposed cases of irreducible complexity. Intelligent design proponents often insist that their claims do not require a religious component. However, various philosophical and theological issues are naturally raised by the claims of intelligent design. Intelligent design proponents attempt to demonstrate scientifically that features such as irreducible complexity and specified complexity could not arise through natural processes, and therefore required repeated direct miraculous interventions by a Designer (often a Christian concept of God). They reject the possibility of a Designer who works merely through setting natural laws in motion at the outset, in contrast to theistic evolution (to which even Charles Darwin was open). 
Intelligent design is distinct because it asserts repeated miraculous interventions in addition to designed laws. This contrasts with other major religious traditions of a created world in which God's interactions and influences do not work in the same way as physical causes. The Roman Catholic tradition makes a careful distinction between ultimate metaphysical explanations and secondary, natural causes. The concept of direct miraculous intervention raises other potential theological implications. If such a Designer does not intervene to alleviate suffering even though capable of intervening for other reasons, some argue this implies that the designer is not omnibenevolent (see problem of evil and related theodicy). Further, repeated interventions imply that the original design was not perfect and final, and thus pose a problem for any who believe that the Creator's work had been both perfect and final. Intelligent design proponents seek to explain the problem of poor design in nature by insisting that we have simply failed to understand the perfection of the design (for example, proposing that vestigial organs have unknown purposes), or by proposing that designers do not necessarily produce the best design they can, and may have unknowable motives for their actions. In August 2005, the director of the Vatican Observatory, the Jesuit astronomer George Coyne, set out theological reasons for accepting evolution in an article in The Tablet, and said that "Intelligent design isn't science even though it pretends to be. If you want to teach it in schools, intelligent design should be taught when religion or cultural history is taught, not science." In 2006, he "condemned ID as a kind of 'crude creationism' which reduced God to a mere engineer." Critics state that the wedge strategy's "ultimate goal is to create a theocratic state".
Intelligent design has also been characterized as a God-of-the-gaps argument, which has the following form: there is a gap in scientific knowledge, and the gap is filled by an act of God (or of an intelligent designer), which is then claimed as evidence for the existence of God (or the designer). A God-of-the-gaps argument is the theological version of an argument from ignorance. A key feature of this type of argument is that it merely answers outstanding questions with explanations (often supernatural) that are unverifiable and ultimately themselves subject to unanswerable questions. Historians of science observe that the astronomy of the earliest civilizations, although astonishing and incorporating mathematical constructions far in excess of any practical value, proved to be misdirected and of little importance to the development of science because its practitioners failed to inquire more carefully into the mechanisms that drove the heavenly bodies across the sky. It was the Greek civilization that first practiced science, although not yet as a formally defined experimental science, but nevertheless an attempt to rationalize the world of natural experience without recourse to divine intervention. In this historically motivated definition of science, any appeal to an intelligent creator is explicitly excluded for the paralysing effect it may have on scientific progress. Kitzmiller v. Dover Area School District was the first direct challenge brought in the United States federal courts against a public school district that required the presentation of intelligent design as an alternative to evolution. The plaintiffs successfully argued that intelligent design is a form of creationism, and that the school board policy thus violated the Establishment Clause of the First Amendment to the United States Constitution. Eleven parents of students in Dover, Pennsylvania, sued the Dover Area School District over a statement that the school board required be read aloud in ninth-grade science classes when evolution was taught.
The plaintiffs were represented by the American Civil Liberties Union (ACLU), Americans United for Separation of Church and State (AU) and Pepper Hamilton LLP. The National Center for Science Education acted as consultants for the plaintiffs. The defendants were represented by the Thomas More Law Center. The suit was tried in a bench trial from September 26 to November 4, 2005, before Judge John E. Jones III. Kenneth R. Miller, Kevin Padian, Brian Alters, Robert T. Pennock, Barbara Forrest and John F. Haught served as expert witnesses for the plaintiffs. Michael Behe, Steve Fuller and Scott Minnich served as expert witnesses for the defense. On December 20, 2005, Judge Jones issued his 139-page findings of fact and decision, ruling that the Dover mandate was unconstitutional, and barring intelligent design from being taught in Pennsylvania's Middle District public school science classrooms. On November 8, 2005, there had been an election in which the eight Dover school board members who voted for the intelligent design requirement were all defeated by challengers who opposed the teaching of intelligent design in a science class, and the current school board president stated that the board did not intend to appeal the ruling. In his findings of fact, Judge Jones made the following condemnation of the "Teach the Controversy" strategy: "Moreover, ID's backers have sought to avoid the scientific scrutiny which we have now determined that it cannot withstand by advocating that the controversy, but not ID itself, should be taught in science class. This tactic is at best disingenuous, and at worst a canard. The goal of the IDM is not to encourage critical thought, but to foment a revolution which would supplant evolutionary theory with ID." Judge Jones himself anticipated that his ruling would be criticized, saying in his decision: "Those who disagree with our holding will likely mark it as the product of an activist judge. If so, they will have erred as this is manifestly not an activist Court. Rather, this case came to us as the result of the activism of an ill-informed faction on a school board, aided by a national public interest law firm eager to find a constitutional test case on ID, who in combination drove the Board to adopt an imprudent and ultimately unconstitutional policy. The breathtaking inanity of the Board's decision is evident when considered against the factual backdrop which has now been fully revealed through this trial. The students, parents, and teachers of the Dover Area School District deserved better than to be dragged into this legal maelstrom, with its resulting utter waste of monetary and personal resources." As Jones had predicted, John G. West, Associate Director of the Center for Science and Culture, said: "The Dover decision is an attempt by an activist federal judge to stop the spread of a scientific idea and even to prevent criticism of Darwinian evolution through government-imposed censorship rather than open debate, and it won't work. He has conflated Discovery Institute's position with that of the Dover school board, and he totally misrepresents intelligent design and the motivations of the scientists who research it." Newspapers have noted that the judge is "a Republican and a churchgoer". The decision has been examined in a search for flaws and conclusions, partly by intelligent design supporters aiming to avoid future defeats in court. In its Winter issue of 2007, the Montana Law Review published three articles. In the first, David K. DeWolf, John G. West and Casey Luskin, all of the Discovery Institute, argued that intelligent design is a valid scientific theory, that the Jones court should not have addressed the question of whether it was a scientific theory, and that the Kitzmiller decision will have no effect at all on the development and adoption of intelligent design as an alternative to standard evolutionary theory. In the second, Peter H.
Irons responded, arguing that the decision was extremely well reasoned and spells the death knell for intelligent design efforts to introduce creationism in public schools, while in the third, DeWolf, et al., answer the points made by Irons. However, fear of a similar lawsuit has resulted in other school boards abandoning intelligent design "teach the controversy" proposals. A number of anti-evolution bills have been introduced in the United States Congress and State legislatures since 2001, based largely upon language drafted by the Discovery Institute for the Santorum Amendment. Their aim has been to expose more students to articles and videos produced by advocates of intelligent design that criticise evolution. They have been presented as supporting "academic freedom", on the supposition that teachers, students, and college professors face intimidation and retaliation when discussing scientific criticisms of evolution, and therefore require protection. Critics of the legislation have pointed out that there are no credible scientific critiques of evolution, and an investigation in Florida of allegations of intimidation and retaliation found no evidence that it had occurred. The vast majority of the bills have been unsuccessful, with the one exception being the Louisiana Science Education Act, which was enacted in 2008. In April 2010, the American Academy of Religion issued Guidelines for Teaching About Religion in K–12 Public Schools in the United States, which included guidance that creation science or intelligent design should not be taught in science classes, as "Creation science and intelligent design represent worldviews that fall outside of the realm of science that is defined as (and limited to) a method of inquiry based on gathering observable and measurable evidence subject to specific principles of reasoning."
However, these worldviews as well as others "that focus on speculation regarding the origins of life represent another important and relevant form of human inquiry that is appropriately studied in literature or social sciences courses. Such study, however, must include a diversity of worldviews representing a variety of religious and philosophical perspectives and must avoid privileging one view as more legitimate than others." In June 2007, the Council of Europe's Committee on Culture, Science and Education issued a report, The dangers of creationism in education, which states "Creationism in any of its forms, such as 'intelligent design', is not based on facts, does not use any scientific reasoning and its contents are pathetically inadequate for science classes." In describing the dangers posed to education by teaching creationism, it described intelligent design as "anti-science" and involving "blatant scientific fraud" and "intellectual deception" that "blurs the nature, objectives and limits of science" and links it and other forms of creationism to denialism. On October 4, 2007, the Council of Europe's Parliamentary Assembly approved a resolution stating that schools should "resist presentation of creationist ideas in any discipline other than religion", including "intelligent design", which it described as "the latest, more refined version of creationism", "presented in a more subtle way". The resolution emphasises that the aim of the report is not to question or to fight a belief, but to "warn against certain tendencies to pass off a belief as science". In the United Kingdom, public education includes religious education, and there are many faith schools that teach the ethos of particular denominations. 
When it was revealed that a group called Truth in Science had distributed DVDs produced by Illustra Media featuring Discovery Institute fellows making the case for design in nature, and claimed they were being used by 59 schools, the Department for Education and Skills (DfES) stated that "Neither creationism nor intelligent design are taught as a subject in schools, and are not specified in the science curriculum" (part of the National Curriculum, which does not apply to private schools or to education in Scotland). The DfES subsequently stated that "Intelligent design is not a recognised scientific theory; therefore, it is not included in the science curriculum", but left the way open for it to be explored in religious education in relation to different beliefs, as part of a syllabus set by a local Standing Advisory Council on Religious Education. In 2006, the Qualifications and Curriculum Authority produced a "Religious Education" model unit in which pupils can learn about religious and nonreligious views about creationism, intelligent design and evolution by natural selection. On June 25, 2007, the UK Government responded to an e-petition by saying that creationism and intelligent design should not be taught as science, though teachers would be expected to answer pupils' questions within the standard framework of established scientific theories. Detailed government "Creationism teaching guidance" for schools in England was published on September 18, 2007. It states that "Intelligent design lies wholly outside of science", has no underpinning scientific principles, or explanations, and is not accepted by the science community as a whole. 
Though it should not be taught as science, "Any questions about creationism and intelligent design which arise in science lessons, for example as a result of media coverage, could provide the opportunity to explain or explore why they are not considered to be scientific theories and, in the right context, why evolution is considered to be a scientific theory." However, "Teachers of subjects such as RE, history or citizenship may deal with creationism and intelligent design in their lessons." The British Centre for Science Education lobbying group has the goal of "countering creationism within the UK" and has been involved in government lobbying in the UK in this regard. Northern Ireland's Department for Education says that the curriculum provides an opportunity for alternative theories to be taught. The Democratic Unionist Party (DUP) – which has links to fundamentalist Christianity – has been campaigning to have intelligent design taught in science classes. A DUP former Member of Parliament, David Simpson, has sought assurances from the education minister that pupils will not lose marks if they give creationist or intelligent design answers to science questions. In 2007, Lisburn city council voted in favor of a DUP recommendation to write to post-primary schools asking what their plans are to develop teaching material in relation to "creation, intelligent design and other theories of origin". Plans by Dutch Education Minister Maria van der Hoeven to "stimulate an academic debate" on the subject in 2005 caused a severe public backlash. After the 2006 elections, she was succeeded by Ronald Plasterk, described as a "molecular geneticist, staunch atheist and opponent of intelligent design". 
In reaction to this situation in the Netherlands, the Director General of the Flemish Secretariat of Catholic Education (VSKO) in Belgium, Mieke Van Hecke, declared that Catholic scientists had long accepted the theory of evolution and that intelligent design and creationism do not belong in Flemish Catholic schools, adding: "It's not the task of politics to introduce new ideas; that's the task and goal of science." The status of intelligent design in Australia is somewhat similar to that in the UK (see Education in Australia). In 2005, the Australian Minister for Education, Science and Training, Brendan Nelson, raised the notion of intelligent design being taught in science classes. The public outcry caused the minister to quickly concede that the correct forum for intelligent design, if it were to be taught, is in religion or philosophy classes. The Australian chapter of Campus Crusade for Christ distributed a DVD of the Discovery Institute's documentary Unlocking the Mystery of Life (2002) to Australian secondary schools. Tim Hawkes, the head of The King's School, one of Australia's leading private schools, supported use of the DVD in the classroom at the discretion of teachers and principals. Muzaffar Iqbal, a notable Pakistani-Canadian Muslim, signed "A Scientific Dissent From Darwinism", a petition from the Discovery Institute. Ideas similar to intelligent design have been considered respected intellectual options among Muslims, and in Turkey many intelligent design books have been translated. In Istanbul in 2007, public meetings promoting intelligent design were sponsored by the local government, and David Berlinski of the Discovery Institute was the keynote speaker at a meeting in May 2007. In 2011, the International Society for Krishna Consciousness (ISKCON) Bhaktivedanta Book Trust published an intelligent design book titled Rethinking Darwin: A Vedic Study of Darwinism and Intelligent Design.
The book included contributions from intelligent design advocates William A. Dembski, Jonathan Wells and Michael Behe as well as from Hindu creationists Leif A. Jensen and Michael Cremo.
[ { "paragraph_id": 0, "text": "Intelligent design (ID) is a pseudoscientific argument for the existence of God, presented by its proponents as \"an evidence-based scientific theory about life's origins\". Proponents claim that \"certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection.\" ID is a form of creationism that lacks empirical support and offers no testable or tenable hypotheses, and is therefore not science. The leading proponents of ID are associated with the Discovery Institute, a Christian, politically conservative think tank based in the United States.", "title": "" }, { "paragraph_id": 1, "text": "Although the phrase intelligent design had featured previously in theological discussions of the argument from design, its first publication in its present use as an alternative term for creationism was in Of Pandas and People, a 1989 creationist textbook intended for high school biology classes. The term was substituted into drafts of the book, directly replacing references to creation science and creationism, after the 1987 Supreme Court's Edwards v. Aguillard decision barred the teaching of creation science in public schools on constitutional grounds. From the mid-1990s, the intelligent design movement (IDM), supported by the Discovery Institute, advocated inclusion of intelligent design in public school biology curricula. This led to the 2005 Kitzmiller v. 
Dover Area School District trial, which found that intelligent design was not science, that it \"cannot uncouple itself from its creationist, and thus religious, antecedents\", and that the public school district's promotion of it therefore violated the Establishment Clause of the First Amendment to the United States Constitution.", "title": "" }, { "paragraph_id": 2, "text": "ID presents two main arguments against evolutionary explanations: irreducible complexity and specified complexity, asserting that certain biological and informational features of living things are too complex to be the result of natural selection. Detailed scientific examination has rebutted several examples for which evolutionary explanations are claimed to be impossible.", "title": "" }, { "paragraph_id": 3, "text": "ID seeks to challenge the methodological naturalism inherent in modern science, though proponents concede that they have yet to produce a scientific theory. As a positive argument against evolution, ID proposes an analogy between natural systems and human artifacts, a version of the theological argument from design for the existence of God. ID proponents then conclude by analogy that the complex features, as defined by ID, are evidence of design. Critics of ID find a false dichotomy in the premise that evidence against evolution constitutes evidence for design.", "title": "" }, { "paragraph_id": 4, "text": "In 1910, evolution was not a topic of major religious controversy in America, but in the 1920s, the fundamentalist–modernist controversy in theology resulted in fundamentalist Christian opposition to teaching evolution and resulted in the origins of modern creationism. As a result, teaching of evolution was effectively suspended in U.S. public schools until the 1960s, and when evolution was then reintroduced into the curriculum, there was a series of court cases in which attempts were made to get creationism taught alongside evolution in science classes. 
Young Earth creationists (YECs) promoted \"creation science\" as \"an alternative scientific explanation of the world in which we live\". This frequently invoked the argument from design to explain complexity in nature as supposedly demonstrating the existence of God.", "title": "History" }, { "paragraph_id": 5, "text": "The argument from design, also known as the teleological argument or \"argument from intelligent design\", has been presented by theologists for centuries. A sufficiently succinct summary of the argument from design shows its unscientific, circular, and thereby illogical reasoning, for example as follows: \"Wherever complex design exists, there must have been a designer; nature is complex and therefore nature must have had an intelligent designer.\" Thomas Aquinas presented it in his fifth proof of God's existence as a syllogism. In 1802, William Paley's Natural Theology presented examples of intricate purpose in organisms. His version of the watchmaker analogy argued that a watch has evidently been designed by a craftsman and that it is supposedly just as evident that the complexity and adaptation seen in nature must have been designed. He went on to argue that the perfection and diversity of these designs supposedly shows the designer to be omnipotent and that this can supposedly only be the Christian god. Like \"creation science\", intelligent design centers on Paley's religious argument from design, but while Paley's natural theology was open to deistic design through God-given laws, intelligent design seeks scientific confirmation of repeated supposedly miraculous interventions in the history of life. \"Creation science\" prefigured the intelligent design arguments of irreducible complexity, even featuring the bacterial flagellum. In the United States, attempts to introduce \"creation science\" into schools led to court rulings that it is religious in nature and thus cannot be taught in public school science classrooms. 
Intelligent design is also presented as science and shares other arguments with \"creation science\" but avoids literal Biblical references to such topics as the biblical flood story or using Bible verses to estimate the age of the Earth.", "title": "History" }, { "paragraph_id": 6, "text": "Barbara Forrest writes that the intelligent design movement began in 1984 with the book The Mystery of Life's Origin: Reassessing Current Theories, co-written by the creationist and chemist Charles B. Thaxton and two other authors and published by Jon A. Buell's Foundation for Thought and Ethics.", "title": "History" }, { "paragraph_id": 7, "text": "In March 1986, Stephen C. Meyer published a review of this book, discussing how information theory could suggest that messages transmitted by DNA in the cell show \"specified complexity\" and must have been created by an intelligent agent. He also argued that science is based upon \"foundational assumptions\" of naturalism that were as much a matter of faith as those of \"creation theory\". In November of that year, Thaxton described his reasoning as a more sophisticated form of Paley's argument from design. At a conference that Thaxton held in 1988 (\"Sources of Information Content in DNA\"), he said that his intelligent cause view was compatible with both metaphysical naturalism and supernaturalism.", "title": "History" }, { "paragraph_id": 8, "text": "Intelligent design avoids identifying or naming the intelligent designer—it merely states that one (or more) must exist—but leaders of the movement have said the designer is the Christian God. Whether this lack of specificity about the designer's identity in public discussions is a genuine feature of the concept – or just a posture taken to avoid alienating those who would separate religion from the teaching of science – has been a matter of great debate between supporters and critics of intelligent design. The Kitzmiller v. 
Dover Area School District court ruling held the latter to be the case.", "title": "History" }, { "paragraph_id": 9, "text": "Since the Middle Ages, discussion of the religious \"argument from design\" or \"teleological argument\" in theology, with its concept of \"intelligent design\", has persistently referred to the theistic Creator God. Although ID proponents chose this provocative label for their proposed alternative to evolutionary explanations, they have de-emphasized their religious antecedents and denied that ID is natural theology, while still presenting ID as supporting the argument for the existence of God.", "title": "History" }, { "paragraph_id": 10, "text": "While intelligent design proponents have pointed out past examples of the phrase intelligent design that they said were not creationist and faith-based, they have failed to show that these usages had any influence on those who introduced the label in the intelligent design movement.", "title": "History" }, { "paragraph_id": 11, "text": "Variations on the phrase appeared in Young Earth creationist publications: a 1967 book co-written by Percival Davis referred to \"design according to which basic organisms were created\". In 1970, A. E. Wilder-Smith published The Creation of Life: A Cybernetic Approach to Evolution. The book defended Paley's design argument with computer calculations of the improbability of genetic sequences, which he said could not be explained by evolution but required \"the abhorred necessity of divine intelligent activity behind nature\", and that \"the same problem would be expected to beset the relationship between the designer behind nature and the intelligently designed part of nature known as man.\" In a 1984 article as well as in his affidavit to Edwards v. Aguillard, Dean H. Kenyon defended creation science by stating that \"biomolecular systems require intelligent design and engineering know-how\", citing Wilder-Smith. Creationist Richard B. 
Bliss used the phrase \"creative design\" in Origins: Two Models: Evolution, Creation (1976), and in Origins: Creation or Evolution (1988) wrote that \"while evolutionists are trying to find non-intelligent ways for life to occur, the creationist insists that an intelligent design must have been there in the first place.\" The first systematic use of the term, defined in a glossary and claimed to be other than creationism, was in Of Pandas and People, co-authored by Davis and Kenyon.", "title": "History" }, { "paragraph_id": 12, "text": "The most common modern use of the words \"intelligent design\" as a term intended to describe a field of inquiry began after the United States Supreme Court ruled in June 1987 in the case of Edwards v. Aguillard that it is unconstitutional for a state to require the teaching of creationism in public school science curricula.", "title": "History" }, { "paragraph_id": 13, "text": "A Discovery Institute report says that Charles B. Thaxton, editor of Pandas, had picked the phrase up from a NASA scientist, and thought, \"That's just what I need, it's a good engineering term.\" In two successive 1987 drafts of the book, over one hundred uses of the root word \"creation\", such as \"creationism\" and \"Creation Science\", were changed, almost without exception, to \"intelligent design\", while \"creationists\" was changed to \"design proponents\" or, in one instance, \"cdesign proponentsists\" [sic]. In June 1988, Thaxton held a conference titled \"Sources of Information Content in DNA\" in Tacoma, Washington. Stephen C. 
Meyer was at the conference, and later recalled that \"The term intelligent design came up...\" In December 1988 Thaxton decided to use the label \"intelligent design\" for his new creationist movement.", "title": "History" }, { "paragraph_id": 14, "text": "Of Pandas and People was published in 1989, and in addition to including all the current arguments for ID, was the first book to make systematic use of the terms \"intelligent design\" and \"design proponents\" as well as the phrase \"design theory\", defining the term intelligent design in a glossary and representing it as not being creationism. It thus represents the start of the modern intelligent design movement. \"Intelligent design\" was the most prominent of around fifteen new terms it introduced as a new lexicon of creationist terminology to oppose evolution without using religious language. It was the first place where the phrase \"intelligent design\" appeared in its primary present use, as stated both by its publisher Jon A. Buell, and by William A. Dembski in his expert witness report for Kitzmiller v. Dover Area School District.", "title": "History" }, { "paragraph_id": 15, "text": "The National Center for Science Education (NCSE) has criticized the book for presenting all of the basic arguments of intelligent design proponents and being actively promoted for use in public schools before any research had been done to support these arguments. Although presented as a scientific textbook, philosopher of science Michael Ruse considers the contents \"worthless and dishonest\". An American Civil Liberties Union lawyer described it as a political tool aimed at students who did not \"know science or understand the controversy over evolution and creationism\". 
One of the authors of the science framework used by California schools, Kevin Padian, condemned it for its "sub-text", "intolerance for honest science" and "incompetence".

Concepts

The term "irreducible complexity" was introduced by biochemist Michael Behe in his 1996 book Darwin's Black Box, though he had already described the concept in his contributions to the 1993 revised edition of Of Pandas and People. Behe defines it as "a single system which is composed of several well-matched interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning".

Behe uses the analogy of a mousetrap to illustrate this concept. A mousetrap consists of several interacting pieces—the base, the catch, the spring and the hammer—all of which must be in place for the mousetrap to work. Removal of any one piece destroys the function of the mousetrap. Intelligent design advocates assert that natural selection could not create irreducibly complex systems, because the selectable function is present only when all parts are assembled. Behe argued that irreducibly complex biological mechanisms include the bacterial flagellum of E. coli, the blood clotting cascade, cilia, and the adaptive immune system.

Critics point out that the irreducible complexity argument assumes that the necessary parts of a system have always been necessary and therefore could not have been added sequentially. They argue that something that is at first merely advantageous can later become necessary as other components change. Furthermore, they argue, evolution often proceeds by altering preexisting parts or by removing them from a system, rather than by adding them.
This is sometimes called the "scaffolding objection", by an analogy with scaffolding, which can support an "irreducibly complex" building until it is complete and able to stand on its own. Behe has acknowledged using "sloppy prose", and that his "argument against Darwinism does not add up to a logical proof." Irreducible complexity has remained a popular argument among advocates of intelligent design; in the Dover trial, the court held that "Professor Behe's claim for irreducible complexity has been refuted in peer-reviewed research papers and has been rejected by the scientific community at large."

In 1986, Charles B. Thaxton, a physical chemist and creationist, used the term "specified complexity" from information theory when claiming that messages transmitted by DNA in the cell were specified by intelligence, and must have originated with an intelligent agent. The intelligent design concept of "specified complexity" was developed in the 1990s by mathematician, philosopher, and theologian William A. Dembski. Dembski states that when something exhibits specified complexity (i.e., is both complex and "specified", simultaneously), one can infer that it was produced by an intelligent cause (i.e., that it was designed) rather than being the result of natural processes. He provides the following examples: "A single letter of the alphabet is specified without being complex. A long sentence of random letters is complex without being specified. A Shakespearean sonnet is both complex and specified." He states that details of living things can be similarly characterized, especially the "patterns" of molecular sequences in functional biological molecules such as DNA.

Dembski defines complex specified information (CSI) as anything with a less than 1 in 10^150 chance of occurring by (natural) chance.
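As a rough arithmetic illustration (ours, not part of Dembski's own presentation), the sketch below estimates how long a pre-specified letter sequence must be before its chance probability falls below that 1-in-10^150 bound, assuming symbols drawn uniformly and independently from a 26-letter alphabet:

```python
import math

# Dembski's "universal probability bound": in his framework, specified
# events with probability below 1 in 10^150 are attributed to design
# rather than chance.
UNIVERSAL_PROBABILITY_BOUND = 10.0 ** -150

def chance_probability(length: int, alphabet_size: int = 26) -> float:
    """Probability of producing one specific sequence of `length`
    symbols drawn uniformly at random from `alphabet_size` symbols."""
    return alphabet_size ** -length

# Smallest n with 26^-n < 10^-150, i.e. n > 150 / log10(26):
threshold = math.ceil(150 / math.log10(26))
print(threshold)                                              # 107
print(chance_probability(threshold) < UNIVERSAL_PROBABILITY_BOUND)  # True
```

On this accounting, any pre-specified English text longer than roughly 107 letters already falls below the bound, which suggests why Dembski's Shakespearean-sonnet example counts as both complex and specified under his criterion.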
Critics say that this renders the argument a tautology: complex specified information cannot occur naturally because Dembski has defined it thus, so the real question becomes whether or not CSI actually exists in nature.

The conceptual soundness of Dembski's specified complexity/CSI argument has been discredited in the scientific and mathematical communities. Specified complexity has yet to be shown to have wide applications in other fields, as Dembski asserts. John Wilkins and Wesley R. Elsberry characterize Dembski's "explanatory filter" as eliminative because it eliminates explanations sequentially: first regularity, then chance, finally defaulting to design. They argue that this procedure is flawed as a model for scientific inference because the asymmetric way it treats the different possible explanations renders it prone to making false conclusions.

Richard Dawkins, another critic of intelligent design, argues in The God Delusion (2006) that allowing for an intelligent designer to account for unlikely complexity only postpones the problem, as such a designer would need to be at least as complex. Other scientists have argued that evolution through selection is better able to explain the observed complexity, as is evident from the use of selective evolution to design certain electronic, aeronautic and automotive systems that are considered problems too complex for human "intelligent designers".

Intelligent design proponents have also occasionally appealed to broader teleological arguments outside of biology, most notably an argument based on the fine-tuning of universal constants that make matter and life possible and that are argued not to be solely attributable to chance.
These include the values of fundamental physical constants, the relative strength of nuclear forces, electromagnetism, and gravity between fundamental particles, as well as the ratios of masses of such particles. Intelligent design proponent and Center for Science and Culture fellow Guillermo Gonzalez argues that if any of these values were even slightly different, the universe would be dramatically different, making it impossible for many chemical elements and features of the Universe, such as galaxies, to form. Thus, proponents argue, an intelligent designer of life was needed to ensure that the requisite features were present to achieve that particular outcome.

Scientists have generally responded that these arguments are poorly supported by existing evidence. Victor J. Stenger and other critics say both intelligent design and the weak form of the anthropic principle are essentially a tautology; in his view, these arguments amount to the claim that life is able to exist because the Universe is able to support life. The claim of the improbability of a life-supporting universe has also been criticized as an argument by lack of imagination for assuming no other forms of life are possible. Life as we know it might not exist if things were different, but a different sort of life might exist in its place. A number of critics also suggest that many of the stated variables appear to be interconnected and that calculations made by mathematicians and physicists suggest that the emergence of a universe similar to ours is quite probable.

The contemporary intelligent design movement formulates its arguments in secular terms and intentionally avoids identifying the intelligent agent (or agents) it posits. Although proponents do not state that God is the designer, the designer is often implicitly hypothesized to have intervened in a way that only a god could intervene.
Dembski, in The Design Inference (1998), speculates that an alien culture could fulfill these requirements. Of Pandas and People proposes that SETI illustrates an appeal to intelligent design in science. In 2000, philosopher of science Robert T. Pennock suggested the Raëlian UFO religion as a real-life example of an extraterrestrial intelligent designer view that "make[s] many of the same bad arguments against evolutionary theory as creationists". The authoritative description of intelligent design, however, explicitly states that the Universe displays features of having been designed. Acknowledging the paradox, Dembski concludes that "no intelligent agent who is strictly physical could have presided over the origin of the universe or the origin of life." The leading proponents have made statements to their supporters that they believe the designer to be the Christian God, to the exclusion of all other religions.

Beyond the debate over whether intelligent design is scientific, a number of critics argue that existing evidence makes the design hypothesis appear unlikely, irrespective of its status in the world of science. For example, Jerry Coyne asks why a designer would "give us a pathway for making vitamin C, but then destroy it by disabling one of its enzymes" (see pseudogene) and why a designer would not "stock oceanic islands with reptiles, mammals, amphibians, and freshwater fish, despite the suitability of such islands for these species". Coyne also points to the fact that "the flora and fauna on those islands resemble that of the nearest mainland, even when the environments are very different" as evidence that species were not placed there by a designer. Previously, in Darwin's Black Box, Behe had argued that we are simply incapable of understanding the designer's motives, so such questions cannot be answered definitively.
Odd designs could, for example, "...have been placed there by the designer for a reason—for artistic reasons, for variety, to show off, for some as-yet-undetected practical purpose, or for some unguessable reason—or they might not." Coyne responds that in light of the evidence, "either life resulted not from intelligent design, but from evolution; or the intelligent designer is a cosmic prankster who designed everything to make it look as though it had evolved."

Intelligent design proponents such as Paul Nelson avoid the problem of poor design in nature by insisting that we have simply failed to understand the perfection of the design. Behe cites Paley as his inspiration, but he differs from Paley's expectation of a perfect Creation and proposes that designers do not necessarily produce the best design they can. Behe suggests that, like a parent not wanting to spoil a child with extravagant toys, the designer can have multiple motives for not giving priority to excellence in engineering. He says that "Another problem with the argument from imperfection is that it critically depends on a psychoanalysis of the unidentified designer. Yet the reasons that a designer would or would not do anything are virtually impossible to know unless the designer tells you specifically what those reasons are." This reliance on inexplicable motives of the designer makes intelligent design scientifically untestable. Retired UC Berkeley law professor, author and intelligent design advocate Phillip E.
Johnson puts forward a core definition that the designer creates for a purpose, giving the example that in his view AIDS was created to punish immorality and is not caused by HIV, but such motives cannot be tested by scientific methods.

Asserting the need for a designer of complexity also raises the question "What designed the designer?" Intelligent design proponents say that the question is irrelevant to or outside the scope of intelligent design. Richard Wein counters that "...scientific explanations often create new unanswered questions. But, in assessing the value of an explanation, these questions are not irrelevant. They must be balanced against the improvements in our understanding which the explanation provides. Invoking an unexplained being to explain the origin of other beings (ourselves) is little more than question-begging. The new question raised by the explanation is as problematic as the question which the explanation purports to answer." Richard Dawkins sees the assertion that the designer does not need to be explained as a thought-terminating cliché. In the absence of observable, measurable evidence, the question "What designed the designer?" leads to an infinite regression from which intelligent design proponents can only escape by resorting to religious creationism or logical contradiction.

Movement

The intelligent design movement is a direct outgrowth of the creationism of the 1980s. The scientific and academic communities, along with a U.S.
federal court, view intelligent design as either a form of creationism or as a direct descendant that is closely intertwined with traditional creationism; and several authors explicitly refer to it as "intelligent design creationism".

The movement is headquartered in the Center for Science and Culture, established in 1996 as the creationist wing of the Discovery Institute to promote a religious agenda calling for broad social, academic and political changes. The Discovery Institute's intelligent design campaigns have been staged primarily in the United States, although efforts have been made in other countries to promote intelligent design. Leaders of the movement say intelligent design exposes the limitations of scientific orthodoxy and of the secular philosophy of naturalism. Intelligent design proponents allege that science should not be limited to naturalism and should not demand the adoption of a naturalistic philosophy that dismisses out of hand any explanation that includes a supernatural cause. The overall goal of the movement is to "reverse the stifling dominance of the materialist worldview" represented by the theory of evolution in favor of "a science consonant with Christian and theistic convictions".

Phillip E. Johnson stated that the goal of intelligent design is to cast creationism as a scientific concept. All leading intelligent design proponents are fellows or staff of the Discovery Institute and its Center for Science and Culture.
Nearly all intelligent design concepts and the associated movement are the products of the Discovery Institute, which guides the movement and follows its wedge strategy while conducting its "teach the controversy" campaign and its other related programs.

Leading intelligent design proponents have made conflicting statements regarding intelligent design. In statements directed at the general public, they say intelligent design is not religious; when addressing conservative Christian supporters, they state that intelligent design has its foundation in the Bible. Recognizing the need for support, the Institute affirms its Christian, evangelistic orientation:

Alongside a focus on influential opinion-makers, we also seek to build up a popular base of support among our natural constituency, namely, Christians. We will do this primarily through apologetics seminars. We intend these to encourage and equip believers with new scientific evidences that support the faith, as well as to "popularize" our ideas in the broader culture.

Barbara Forrest, an expert who has written extensively on the movement, describes this as being due to the Discovery Institute's obfuscating its agenda as a matter of policy.
She has written that the movement's "activities betray an aggressive, systematic agenda for promoting not only intelligent design creationism, but the religious worldview that undergirds it."

Although arguments for intelligent design by the intelligent design movement are formulated in secular terms and intentionally avoid positing the identity of the designer, the majority of principal intelligent design advocates are publicly religious Christians who have stated that, in their view, the designer proposed in intelligent design is the Christian conception of God. Stuart Burgess, Phillip E. Johnson, William A. Dembski, and Stephen C. Meyer are evangelical Protestants; Michael Behe is a Roman Catholic; Paul Nelson supports young Earth creationism; and Jonathan Wells is a member of the Unification Church. Non-Christian proponents include David Klinghoffer, who is Jewish, Michael Denton and David Berlinski, who are agnostic, and Muzaffar Iqbal, a Pakistani-Canadian Muslim. Phillip E. Johnson has stated that cultivating ambiguity by employing secular language in arguments that are carefully crafted to avoid overtones of theistic creationism is a necessary first step for ultimately reintroducing the Christian concept of God as the designer. Johnson explicitly calls for intelligent design proponents to obfuscate their religious motivations so as to avoid having intelligent design identified "as just another way of packaging the Christian evangelical message." Johnson emphasizes that "...the first thing that has to be done is to get the Bible out of the discussion.
...This is not to say that the biblical issues are unimportant; the point is rather that the time to address them will be after we have separated materialist prejudice from scientific fact."

The strategy of deliberately disguising the religious intent of intelligent design has been described by William A. Dembski in The Design Inference. In this work, Dembski lists a god or an "alien life force" as two possible options for the identity of the designer; however, in his book Intelligent Design: The Bridge Between Science and Theology (1999), Dembski states:

Christ is indispensable to any scientific theory, even if its practitioners don't have a clue about him. The pragmatics of a scientific theory can, to be sure, be pursued without recourse to Christ. But the conceptual soundness of the theory can in the end only be located in Christ.

Dembski also stated, "ID is part of God's general revelation ... Not only does intelligent design rid us of this ideology [materialism], which suffocates the human spirit, but, in my personal experience, I've found that it opens the path for people to come to Christ." Both Johnson and Dembski cite the Bible's Gospel of John as the foundation of intelligent design.

Barbara Forrest contends such statements reveal that leading proponents see intelligent design as essentially religious in nature, not merely a scientific concept that has implications with which their personal religious beliefs happen to coincide. She writes that the leading proponents of intelligent design are closely allied with the ultra-conservative Christian Reconstructionism movement. She lists connections of (current and former) Discovery Institute Fellows Phillip E. Johnson, Charles B.
Thaxton, Michael Behe, Richard Weikart, Jonathan Wells and Francis J. Beckwith to leading Christian Reconstructionist organizations, and the extent of the funding provided to the Institute by Howard Ahmanson, Jr., a leading figure in the Reconstructionist movement.

Not all creationist organizations have embraced the intelligent design movement. According to Thomas Dixon, "Religious leaders have come out against ID too. An open letter affirming the compatibility of Christian faith and the teaching of evolution, first produced in response to controversies in Wisconsin in 2004, has now been signed by over ten thousand clergy from different Christian denominations across America." Hugh Ross of Reasons to Believe, a proponent of Old Earth creationism, believes that the efforts of intelligent design proponents to divorce the concept from Biblical Christianity make its hypothesis too vague. In 2002, he wrote: "Winning the argument for design without identifying the designer yields, at best, a sketchy origins model. Such a model makes little if any positive impact on the community of scientists and other scholars. ... the time is right for a direct approach, a single leap into the origins fray. Introducing a biblically based, scientifically verifiable creation model represents such a leap."

Likewise, two of the most prominent young Earth creationist (YEC) organizations in the world have attempted to distinguish their views from those of the intelligent design movement. Henry M. Morris of the Institute for Creation Research (ICR) wrote, in 1999, that ID, "even if well-meaning and effectively articulated, will not work! It has often been tried in the past and has failed, and it will fail today. The reason it won't work is because it is not the Biblical method." According to Morris: "The evidence of intelligent design ...
must be either followed by or accompanied by a sound presentation of true Biblical creationism if it is to be meaningful and lasting." In 2002, Carl Wieland, then of Answers in Genesis (AiG), criticized design advocates who, though well-intentioned, "'left the Bible out of it'" and thereby unwittingly aided and abetted the modern rejection of the Bible. Wieland explained that "AiG's major 'strategy' is to boldly, but humbly, call the church back to its Biblical foundations ... [so] we neither count ourselves a part of this movement nor campaign against it."

The unequivocal consensus in the scientific community is that intelligent design is not science and has no place in a science curriculum. The U.S. National Academy of Sciences has stated that "creationism, intelligent design, and other claims of supernatural intervention in the origin of life or of species are not science because they are not testable by the methods of science." The U.S. National Science Teachers Association and the American Association for the Advancement of Science have termed it pseudoscience. Others in the scientific community have denounced its tactics, accusing the ID movement of manufacturing false attacks against evolution, of engaging in misinformation and misrepresentation about science, and of marginalizing those who teach it. In September 2012, Bill Nye warned that creationist views threaten science education and innovation in the United States.

In 2001, the Discovery Institute published advertisements under the heading "A Scientific Dissent From Darwinism", with the claim that listed scientists had signed this statement expressing skepticism:

We are skeptical of claims for the ability of random mutation and natural selection to account for the complexity of life.
Careful examination of the evidence for Darwinian theory should be encouraged.

The ambiguous statement did not exclude other known evolutionary mechanisms, and most signatories were not scientists in relevant fields, but starting in 2004 the Institute claimed the increasing number of signatures indicated mounting doubts about evolution among scientists. The statement formed a key component of Discovery Institute campaigns to present intelligent design as scientifically valid by claiming that evolution lacks broad scientific support, with Institute members continuing to cite the list through at least 2011. As part of a strategy to counter these claims, scientists organised Project Steve, which gained more signatories named Steve (or variants) than the Institute's petition, and a counter-petition, "A Scientific Support for Darwinism", which quickly gained similar numbers of signatories.

Several surveys were conducted prior to the December 2005 decision in Kitzmiller v. Dover School District, which sought to determine the level of support for intelligent design among certain groups.
According to a 2005 Harris poll, 10% of adults in the United States viewed human beings as "so complex that they required a powerful force or intelligent being to help create them." Although Zogby polls commissioned by the Discovery Institute show more support, these polls suffer from considerable flaws, such as having a low response rate (248 out of 16,000), being conducted on behalf of an organization with an expressed interest in the outcome of the poll, and containing leading questions.

The 2017 Gallup creationism survey found that 38% of adults in the United States hold the view that "God created humans in their present form at one time within the last 10,000 years" when asked for their views on the origin and development of human beings, which was noted as being at the lowest level in 35 years. Previously, a series of Gallup polls in the United States from 1982 through 2014 on "Evolution, Creationism, Intelligent Design" found that support for "human beings have developed over millions of years from less advanced forms of life, but God guided the process" varied between 31% and 40%, support for "God created human beings in pretty much their present form at one time within the last 10,000 years or so" varied from 40% to 47%, and support for "human beings have developed over millions of years from less advanced forms of life, but God had no part in the process" varied from 9% to 19%. The polls also noted answers to a series of more detailed questions.

There have been allegations that ID proponents have met discrimination, such as being refused tenure or being harshly criticized on the Internet. In the documentary film Expelled: No Intelligence Allowed, released in 2008, host Ben Stein presents five such cases.
The film contends that the mainstream science establishment, in a "scientific conspiracy to keep God out of the nation's laboratories and classrooms", suppresses academics who believe they see evidence of intelligent design in nature or criticize evidence of evolution. Investigation into these allegations turned up alternative explanations for the perceived persecution.

The film portrays intelligent design as motivated by science, rather than religion, though it does not give a detailed definition of the phrase or attempt to explain it on a scientific level. Other than briefly addressing issues of irreducible complexity, Expelled examines it as a political issue. The scientific theory of evolution is portrayed by the film as contributing to fascism, the Holocaust, communism, atheism, and eugenics.

Expelled has been used in private screenings to legislators as part of the Discovery Institute intelligent design campaign for Academic Freedom bills. Review screenings were restricted to churches and Christian groups, and at a special pre-release showing, one of the interviewees, PZ Myers, was refused admission. The American Association for the Advancement of Science describes the film as dishonest and divisive propaganda aimed at introducing religious ideas into public school science classrooms, and the Anti-Defamation League has denounced the film's allegation that evolutionary theory influenced the Holocaust. The film includes interviews with scientists and academics who were misled into taking part by misrepresentation of the topic and title of the film.
Skeptic Michael Shermer describes his experience of being repeatedly asked the same question without context as "surreal".

Criticism

Advocates of intelligent design seek to keep God and the Bible out of the discussion, and present intelligent design in the language of science as though it were a scientific hypothesis. For a theory to qualify as scientific, it is expected to be consistent, parsimonious, scientifically useful, empirically testable and falsifiable, and correctable, dynamic, progressive, and provisional.

For any theory, hypothesis, or conjecture to be considered scientific, it must meet most, and ideally all, of these criteria. The fewer criteria are met, the less scientific it is; if it meets only a few or none at all, then it cannot be treated as scientific in any meaningful sense of the word. Typical objections to defining intelligent design as science are that it lacks consistency, violates the principle of parsimony, is not scientifically useful, is not falsifiable, is not empirically testable, and is not correctable, dynamic, progressive, or provisional.

Intelligent design proponents seek to change this fundamental basis of science by eliminating "methodological naturalism" from science and replacing it with what the leader of the intelligent design movement, Phillip E. Johnson, calls "theistic realism". Intelligent design proponents argue that naturalistic explanations fail to explain certain phenomena and that supernatural explanations provide a simple and intuitive explanation for the origins of life and the universe.
Many intelligent design followers believe that "scientism" is itself a religion that promotes secularism and materialism in an attempt to erase theism from public life, and they view their work in the promotion of intelligent design as a way to return religion to a central role in education and other public spheres.

It has been argued that methodological naturalism is not an assumption of science, but a result of science well done: the God explanation is the least parsimonious, so according to Occam's razor, it cannot be a scientific explanation.

The failure to follow the procedures of scientific discourse and the failure to submit work to the scientific community that withstands scrutiny have weighed against intelligent design being accepted as valid science. The intelligent design movement has not published a properly peer-reviewed article supporting ID in a scientific journal, and has failed to publish supporting peer-reviewed research or data. The only article published in a peer-reviewed scientific journal that made a case for intelligent design was quickly withdrawn by the publisher for having circumvented the journal's peer-review standards. The Discovery Institute says that a number of intelligent design articles have been published in peer-reviewed journals, but critics, largely members of the scientific community, reject this claim and state that intelligent design proponents have set up their own journals with peer review that lacks impartiality and rigor, consisting entirely of intelligent design supporters.

Further criticism stems from the fact that the phrase intelligent design makes use of an assumption of the quality of an observable intelligence, a concept that has no scientific consensus definition.
The characteristics of intelligence are assumed by intelligent design proponents to be observable without specifying what the criteria for the measurement of intelligence should be. Critics say that the design detection methods proposed by intelligent design proponents are radically different from conventional design detection, undermining the key elements that make it possible as legitimate science. Intelligent design proponents, they say, are proposing both searching for a designer without knowing anything about that designer's abilities, parameters, or intentions (which scientists do know when searching for the results of human intelligence), as well as denying the distinction between natural and artificial design that allows scientists to compare complex designed artifacts against the background of the sorts of complexity found in nature.

Among a significant proportion of the general public in the United States, the major concern is whether conventional evolutionary biology is compatible with belief in God and in the Bible, and how this issue is taught in schools. The Discovery Institute's "teach the controversy" campaign promotes intelligent design while attempting to discredit evolution in United States public high school science courses. The scientific community and science education organizations have replied that there is no scientific controversy regarding the validity of evolution and that the controversy exists solely in terms of religion and politics.

Eugenie C. Scott, along with Glenn Branch and other critics, has argued that many points raised by intelligent design proponents are arguments from ignorance. In the argument from ignorance, a lack of evidence for one view is erroneously argued to constitute proof of the correctness of another view.
Scott and Branch say that intelligent design is an argument from ignorance because it relies on a lack of knowledge for its conclusion: lacking a natural explanation for certain specific aspects of evolution, we assume intelligent cause. They contend most scientists would reply that the unexplained is not unexplainable, and that \"we don't know yet\" is a more appropriate response than invoking a cause outside science. Particularly, Michael Behe's demands for ever more detailed explanations of the historical evolution of molecular systems seem to assume a false dichotomy, where either evolution or design is the proper explanation, and any perceived failure of evolution becomes a victory for design. Scott and Branch also contend that the supposedly novel contributions proposed by intelligent design proponents have not served as the basis for any productive scientific research.", "title": "Criticism" }, { "paragraph_id": 59, "text": "In his conclusion to the Kitzmiller trial, Judge John E. Jones III wrote that \"ID is at bottom premised upon a false dichotomy, namely, that to the extent evolutionary theory is discredited, ID is confirmed.\" This same argument had been put forward to support creation science at the McLean v. Arkansas (1982) trial, which found it was \"contrived dualism\", the false premise of a \"two model approach\". Behe's argument of irreducible complexity puts forward negative arguments against evolution but does not make any positive scientific case for intelligent design. It fails to allow for scientific explanations continuing to be found, as has been the case with several examples previously put forward as supposed cases of irreducible complexity.", "title": "Criticism" }, { "paragraph_id": 60, "text": "Intelligent design proponents often insist that their claims do not require a religious component. 
However, various philosophical and theological issues are naturally raised by the claims of intelligent design.", "title": "Criticism" }, { "paragraph_id": 61, "text": "Intelligent design proponents attempt to demonstrate scientifically that features such as irreducible complexity and specified complexity could not arise through natural processes, and therefore required repeated direct miraculous interventions by a Designer (often a Christian concept of God). They reject the possibility of a Designer who works merely through setting natural laws in motion at the outset, in contrast to theistic evolution (to which even Charles Darwin was open). Intelligent design is distinct because it asserts repeated miraculous interventions in addition to designed laws. This contrasts with other major religious traditions of a created world in which God's interactions and influences do not work in the same way as physical causes. The Roman Catholic tradition makes a careful distinction between ultimate metaphysical explanations and secondary, natural causes.", "title": "Criticism" }, { "paragraph_id": 62, "text": "The concept of direct miraculous intervention raises other potential theological implications. If such a Designer does not intervene to alleviate suffering even though capable of intervening for other reasons, some imply the designer is not omnibenevolent (see problem of evil and related theodicy).", "title": "Criticism" }, { "paragraph_id": 63, "text": "Further, repeated interventions imply that the original design was not perfect and final, and thus pose a problem for any who believe that the Creator's work had been both perfect and final. 
Intelligent design proponents seek to explain the problem of poor design in nature by insisting that we have simply failed to understand the perfection of the design (for example, proposing that vestigial organs have unknown purposes), or by proposing that designers do not necessarily produce the best design they can, and may have unknowable motives for their actions.", "title": "Criticism" }, { "paragraph_id": 64, "text": "In August 2005, the director of the Vatican Observatory, the Jesuit astronomer George Coyne, set out theological reasons for accepting evolution in an article in The Tablet, and said that \"Intelligent design isn't science even though it pretends to be. If you want to teach it in schools, intelligent design should be taught when religion or cultural history is taught, not science.\" In 2006, he \"condemned ID as a kind of 'crude creationism' which reduced God to a mere engineer.\"", "title": "Criticism" }, { "paragraph_id": 65, "text": "Critics state that the wedge strategy's \"ultimate goal is to create a theocratic state\".", "title": "Criticism" }, { "paragraph_id": 66, "text": "Intelligent design has also been characterized as a God-of-the-gaps argument, which has the following form:", "title": "Criticism" }, { "paragraph_id": 67, "text": "A God-of-the-gaps argument is the theological version of an argument from ignorance. A key feature of this type of argument is that it merely answers outstanding questions with explanations (often supernatural) that are unverifiable and ultimately themselves subject to unanswerable questions. Historians of science observe that the astronomy of the earliest civilizations, although astonishing and incorporating mathematical constructions far in excess of any practical value, proved to be misdirected and of little importance to the development of science because they failed to inquire more carefully into the mechanisms that drove the heavenly bodies across the sky. 
It was the Greek civilization that first practiced science; although not yet a formally defined experimental science, it was nevertheless an attempt to rationalize the world of natural experience without recourse to divine intervention. In this historically motivated definition of science, any appeal to an intelligent creator is explicitly excluded for the paralysing effect it may have on scientific progress.", "title": "Criticism" }, { "paragraph_id": 68, "text": "Kitzmiller v. Dover Area School District was the first direct challenge brought in the United States federal courts against a public school district that required the presentation of intelligent design as an alternative to evolution. The plaintiffs successfully argued that intelligent design is a form of creationism, and that the school board policy thus violated the Establishment Clause of the First Amendment to the United States Constitution.", "title": "Legal challenges in the United States" }, { "paragraph_id": 69, "text": "Eleven parents of students in Dover, Pennsylvania, sued the Dover Area School District over a statement that the school board required be read aloud in ninth-grade science classes when evolution was taught. The plaintiffs were represented by the American Civil Liberties Union (ACLU), Americans United for Separation of Church and State (AU) and Pepper Hamilton LLP. The National Center for Science Education acted as consultants for the plaintiffs. The defendants were represented by the Thomas More Law Center. The suit was tried in a bench trial from September 26 to November 4, 2005, before Judge John E. Jones III. Kenneth R. Miller, Kevin Padian, Brian Alters, Robert T. Pennock, Barbara Forrest and John F. Haught served as expert witnesses for the plaintiffs. 
Michael Behe, Steve Fuller and Scott Minnich served as expert witnesses for the defense.", "title": "Legal challenges in the United States" }, { "paragraph_id": 70, "text": "On December 20, 2005, Judge Jones issued his 139-page findings of fact and decision, ruling that the Dover mandate was unconstitutional, and barring intelligent design from being taught in Pennsylvania's Middle District public school science classrooms. On November 8, 2005, there had been an election in which the eight Dover school board members who voted for the intelligent design requirement were all defeated by challengers who opposed the teaching of intelligent design in a science class, and the current school board president stated that the board did not intend to appeal the ruling.", "title": "Legal challenges in the United States" }, { "paragraph_id": 71, "text": "In his finding of facts, Judge Jones made the following condemnation of the \"Teach the Controversy\" strategy:", "title": "Legal challenges in the United States" }, { "paragraph_id": 72, "text": "Moreover, ID's backers have sought to avoid the scientific scrutiny which we have now determined that it cannot withstand by advocating that the controversy, but not ID itself, should be taught in science class. This tactic is at best disingenuous, and at worst a canard. The goal of the IDM is not to encourage critical thought, but to foment a revolution which would supplant evolutionary theory with ID.", "title": "Legal challenges in the United States" }, { "paragraph_id": 73, "text": "Judge Jones himself anticipated that his ruling would be criticized, saying in his decision that:", "title": "Legal challenges in the United States" }, { "paragraph_id": 74, "text": "Those who disagree with our holding will likely mark it as the product of an activist judge. If so, they will have erred as this is manifestly not an activist Court. 
Rather, this case came to us as the result of the activism of an ill-informed faction on a school board, aided by a national public interest law firm eager to find a constitutional test case on ID, who in combination drove the Board to adopt an imprudent and ultimately unconstitutional policy. The breathtaking inanity of the Board's decision is evident when considered against the factual backdrop which has now been fully revealed through this trial. The students, parents, and teachers of the Dover Area School District deserved better than to be dragged into this legal maelstrom, with its resulting utter waste of monetary and personal resources.", "title": "Legal challenges in the United States" }, { "paragraph_id": 75, "text": "As Jones had predicted, John G. West, Associate Director of the Center for Science and Culture, said:", "title": "Legal challenges in the United States" }, { "paragraph_id": 76, "text": "The Dover decision is an attempt by an activist federal judge to stop the spread of a scientific idea and even to prevent criticism of Darwinian evolution through government-imposed censorship rather than open debate, and it won't work. He has conflated Discovery Institute's position with that of the Dover school board, and he totally misrepresents intelligent design and the motivations of the scientists who research it.", "title": "Legal challenges in the United States" }, { "paragraph_id": 77, "text": "Newspapers have noted that the judge is \"a Republican and a churchgoer\".", "title": "Legal challenges in the United States" }, { "paragraph_id": 78, "text": "The decision has been examined in a search for flaws and conclusions, partly by intelligent design supporters aiming to avoid future defeats in court. In its Winter issue of 2007, the Montana Law Review published three articles. In the first, David K. DeWolf, John G. 
West and Casey Luskin, all of the Discovery Institute, argued that intelligent design is a valid scientific theory, that the Jones court should not have addressed the question of whether it was a scientific theory, and that the Kitzmiller decision will have no effect at all on the development and adoption of intelligent design as an alternative to standard evolutionary theory. In the second, Peter H. Irons responded, arguing that the decision was extremely well reasoned and sounds the death knell for intelligent design's efforts to introduce creationism in public schools, while in the third, DeWolf, et al., answer the points made by Irons. However, fear of a similar lawsuit has resulted in other school boards abandoning intelligent design \"teach the controversy\" proposals.", "title": "Legal challenges in the United States" }, { "paragraph_id": 79, "text": "A number of anti-evolution bills have been introduced in the United States Congress and State legislatures since 2001, based largely upon language drafted by the Discovery Institute for the Santorum Amendment. Their aim has been to expose more students to articles and videos produced by advocates of intelligent design that criticise evolution. They have been presented as supporting \"academic freedom\", on the supposition that teachers, students, and college professors face intimidation and retaliation when discussing scientific criticisms of evolution, and therefore require protection. Critics of the legislation have pointed out that there are no credible scientific critiques of evolution, and an investigation in Florida of allegations of intimidation and retaliation found no evidence that it had occurred. 
The vast majority of the bills have been unsuccessful, with the one exception being the Louisiana Science Education Act, which was enacted in 2008.", "title": "Legal challenges in the United States" }, { "paragraph_id": 80, "text": "In April 2010, the American Academy of Religion issued Guidelines for Teaching About Religion in K–12 Public Schools in the United States, which included guidance that creation science or intelligent design should not be taught in science classes, as \"Creation science and intelligent design represent worldviews that fall outside of the realm of science that is defined as (and limited to) a method of inquiry based on gathering observable and measurable evidence subject to specific principles of reasoning.\" However, these worldviews as well as others \"that focus on speculation regarding the origins of life represent another important and relevant form of human inquiry that is appropriately studied in literature or social sciences courses. Such study, however, must include a diversity of worldviews representing a variety of religious and philosophical perspectives and must avoid privileging one view as more legitimate than others.\"", "title": "Legal challenges in the United States" }, { "paragraph_id": 81, "text": "In June 2007, the Council of Europe's Committee on Culture, Science and Education issued a report, The dangers of creationism in education, which states \"Creationism in any of its forms, such as 'intelligent design', is not based on facts, does not use any scientific reasoning and its contents are pathetically inadequate for science classes.\" In describing the dangers posed to education by teaching creationism, it described intelligent design as \"anti-science\" and involving \"blatant scientific fraud\" and \"intellectual deception\" that \"blurs the nature, objectives and limits of science\" and links it and other forms of creationism to denialism. 
On October 4, 2007, the Council of Europe's Parliamentary Assembly approved a resolution stating that schools should \"resist presentation of creationist ideas in any discipline other than religion\", including \"intelligent design\", which it described as \"the latest, more refined version of creationism\", \"presented in a more subtle way\". The resolution emphasises that the aim of the report is not to question or to fight a belief, but to \"warn against certain tendencies to pass off a belief as science\".", "title": "Status outside the United States" }, { "paragraph_id": 82, "text": "In the United Kingdom, public education includes religious education, and there are many faith schools that teach the ethos of particular denominations. When it was revealed that a group called Truth in Science had distributed DVDs produced by Illustra Media featuring Discovery Institute fellows making the case for design in nature, and claimed they were being used by 59 schools, the Department for Education and Skills (DfES) stated that \"Neither creationism nor intelligent design are taught as a subject in schools, and are not specified in the science curriculum\" (part of the National Curriculum, which does not apply to private schools or to education in Scotland). The DfES subsequently stated that \"Intelligent design is not a recognised scientific theory; therefore, it is not included in the science curriculum\", but left the way open for it to be explored in religious education in relation to different beliefs, as part of a syllabus set by a local Standing Advisory Council on Religious Education. 
In 2006, the Qualifications and Curriculum Authority produced a \"Religious Education\" model unit in which pupils can learn about religious and nonreligious views about creationism, intelligent design and evolution by natural selection.", "title": "Status outside the United States" }, { "paragraph_id": 83, "text": "On June 25, 2007, the UK Government responded to an e-petition by saying that creationism and intelligent design should not be taught as science, though teachers would be expected to answer pupils' questions within the standard framework of established scientific theories. Detailed government \"Creationism teaching guidance\" for schools in England was published on September 18, 2007. It states that \"Intelligent design lies wholly outside of science\", has no underpinning scientific principles, or explanations, and is not accepted by the science community as a whole. Though it should not be taught as science, \"Any questions about creationism and intelligent design which arise in science lessons, for example as a result of media coverage, could provide the opportunity to explain or explore why they are not considered to be scientific theories and, in the right context, why evolution is considered to be a scientific theory.\" However, \"Teachers of subjects such as RE, history or citizenship may deal with creationism and intelligent design in their lessons.\"", "title": "Status outside the United States" }, { "paragraph_id": 84, "text": "The British Centre for Science Education lobbying group has the goal of \"countering creationism within the UK\" and has been involved in government lobbying in the UK in this regard. Northern Ireland's Department for Education says that the curriculum provides an opportunity for alternative theories to be taught. The Democratic Unionist Party (DUP) – which has links to fundamentalist Christianity – has been campaigning to have intelligent design taught in science classes. 
A DUP former Member of Parliament, David Simpson, has sought assurances from the education minister that pupils will not lose marks if they give creationist or intelligent design answers to science questions. In 2007, Lisburn city council voted in favor of a DUP recommendation to write to post-primary schools asking what their plans are to develop teaching material in relation to \"creation, intelligent design and other theories of origin\".", "title": "Status outside the United States" }, { "paragraph_id": 85, "text": "Plans by Dutch Education Minister Maria van der Hoeven to \"stimulate an academic debate\" on the subject in 2005 caused a severe public backlash. After the 2006 elections, she was succeeded by Ronald Plasterk, described as a \"molecular geneticist, staunch atheist and opponent of intelligent design\". As a reaction on this situation in the Netherlands, the Director General of the Flemish Secretariat of Catholic Education (VSKO [nl]) in Belgium, Mieke Van Hecke [nl], declared that: \"Catholic scientists already accepted the theory of evolution for a long time and that intelligent design and creationism doesn't belong in Flemish Catholic schools. It's not the tasks of the politics to introduce new ideas, that's task and goal of science.\"", "title": "Status outside the United States" }, { "paragraph_id": 86, "text": "The status of intelligent design in Australia is somewhat similar to that in the UK (see Education in Australia). In 2005, the Australian Minister for Education, Science and Training, Brendan Nelson, raised the notion of intelligent design being taught in science classes. The public outcry caused the minister to quickly concede that the correct forum for intelligent design, if it were to be taught, is in religion or philosophy classes. The Australian chapter of Campus Crusade for Christ distributed a DVD of the Discovery Institute's documentary Unlocking the Mystery of Life (2002) to Australian secondary schools. 
Tim Hawkes, the head of The King's School, one of Australia's leading private schools, supported use of the DVD in the classroom at the discretion of teachers and principals.", "title": "Status outside the United States" }, { "paragraph_id": 87, "text": "Muzaffar Iqbal, a notable Pakistani-Canadian Muslim, signed \"A Scientific Dissent From Darwinism\", a petition from the Discovery Institute. Ideas similar to intelligent design have been considered respected intellectual options among Muslims, and in Turkey many intelligent design books have been translated. In Istanbul in 2007, public meetings promoting intelligent design were sponsored by the local government, and David Berlinski of the Discovery Institute was the keynote speaker at a meeting in May 2007.", "title": "Status outside the United States" }, { "paragraph_id": 88, "text": "In 2011, the International Society for Krishna Consciousness (ISKCON) Bhaktivedanta Book Trust published an intelligent design book titled Rethinking Darwin: A Vedic Study of Darwinism and Intelligent Design. The book included contributions from intelligent design advocates William A. Dembski, Jonathan Wells and Michael Behe as well as from Hindu creationists Leif A. Jensen and Michael Cremo.", "title": "Status outside the United States" } ]
Intelligent design (ID) is a pseudoscientific argument for the existence of God, presented by its proponents as "an evidence-based scientific theory about life's origins". Proponents claim that "certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection." ID is a form of creationism that lacks empirical support and offers no testable or tenable hypotheses, and is therefore not science. The leading proponents of ID are associated with the Discovery Institute, a Christian, politically conservative think tank based in the United States. Although the phrase intelligent design had featured previously in theological discussions of the argument from design, its first publication in its present use as an alternative term for creationism was in Of Pandas and People, a 1989 creationist textbook intended for high school biology classes. The term was substituted into drafts of the book, directly replacing references to creation science and creationism, after the 1987 Supreme Court's Edwards v. Aguillard decision barred the teaching of creation science in public schools on constitutional grounds. From the mid-1990s, the intelligent design movement (IDM), supported by the Discovery Institute, advocated inclusion of intelligent design in public school biology curricula. This led to the 2005 Kitzmiller v. Dover Area School District trial, which found that intelligent design was not science, that it "cannot uncouple itself from its creationist, and thus religious, antecedents", and that the public school district's promotion of it therefore violated the Establishment Clause of the First Amendment to the United States Constitution. ID presents two main arguments against evolutionary explanations: irreducible complexity and specified complexity, asserting that certain biological and informational features of living things are too complex to be the result of natural selection. 
Detailed scientific examination has rebutted several examples for which evolutionary explanations are claimed to be impossible. ID seeks to challenge the methodological naturalism inherent in modern science, though proponents concede that they have yet to produce a scientific theory. As a positive argument against evolution, ID proposes an analogy between natural systems and human artifacts, a version of the theological argument from design for the existence of God. ID proponents then conclude by analogy that the complex features, as defined by ID, are evidence of design. Critics of ID find a false dichotomy in the premise that evidence against evolution constitutes evidence for design.
2001-11-29T17:18:04Z
2023-12-29T23:27:42Z
[ "Template:About", "Template:Cite journal", "Template:Cite book", "Template:Cite hansard", "Template:Cite thesis", "Template:Refbegin", "Template:Refend", "Template:Short description", "Template:Main", "Template:Cite web", "Template:Cite conference", "Template:Webarchive", "Template:Cite press release", "Template:Navboxes", "Template:Pp-move-indef", "Template:Use mdy dates", "Template:Snd", "Template:Div col", "Template:Div col end", "Template:Reflist", "Template:Distinguish", "Template:Interlanguage link", "Template:Cite episode", "Template:Sic", "Template:See also", "Template:YouTube", "Template:Cbignore", "Template:Cite encyclopedia", "Template:Featured article", "Template:Cite news", "Template:Authority control", "Template:Blockquote", "Template:Excessive citations inline", "Template:Cite court", "Template:Cite magazine", "Template:Cite interview", "Template:Intelligent Design", "Template:Cite AV media", "Template:Portal bar", "Template:Pp-vandalism" ]
https://en.wikipedia.org/wiki/Intelligent_design
15,302
Integrin
Integrins are transmembrane receptors that help cell-cell and cell-extracellular matrix (ECM) adhesion. Upon ligand binding, integrins activate signal transduction pathways that mediate cellular signals such as regulation of the cell cycle, organization of the intracellular cytoskeleton, and movement of new receptors to the cell membrane. The presence of integrins allows rapid and flexible responses to events at the cell surface (e.g. signal platelets to initiate an interaction with coagulation factors). Several types of integrins exist, and one cell generally has multiple different types on its surface. Integrins are found in all animals while integrin-like receptors are found in plant cells. Integrins work alongside other proteins such as cadherins, the immunoglobulin superfamily cell adhesion molecules, selectins and syndecans, to mediate cell–cell and cell–matrix interaction. Ligands for integrins include fibronectin, vitronectin, collagen and laminin. Integrins are obligate heterodimers composed of α and β subunits. Several genes code for multiple isoforms of these subunits, which gives rise to an array of unique integrins with varied activity. In mammals, integrins are assembled from eighteen α and eight β subunits, in Drosophila five α and two β subunits, and in Caenorhabditis nematodes two α subunits and one β subunit. The α and β subunits are both class I transmembrane proteins, so each penetrates the plasma membrane once, and can possess several cytoplasmic domains. Variants of some subunits are formed by differential RNA splicing; for example, four variants of the beta-1 subunit exist. Through different combinations of the α and β subunits, 24 unique mammalian integrins are generated, excluding splice- and glycosylation variants. Integrin subunits span the cell membrane and have short cytoplasmic domains of 40–70 amino acids. 
The exception is the beta-4 subunit, which has a cytoplasmic domain of 1,088 amino acids, one of the largest of any membrane protein. Outside the cell membrane, the α and β chains lie close together along a length of about 23 nm; the N-terminal 5 nm of each chain forms a ligand-binding region for the ECM. They have been compared to lobster claws: they do not actually "pinch" their ligand, but chemically interact with it at the insides of the "tips" of their "pincers". The molecular mass of the integrin subunits can vary from 90 kDa to 160 kDa. Beta subunits have four cysteine-rich repeated sequences. Both α and β subunits bind several divalent cations. The role of divalent cations in the α subunit is unknown, but may stabilize the folds of the protein. The cations in the β subunits are more interesting: they are directly involved in coordinating at least some of the ligands that integrins bind. Integrins can be categorized in multiple ways. For example, some α chains have an additional structural element (or "domain") inserted toward the N-terminal, the alpha-A domain (so called because it has a similar structure to the A-domains found in the protein von Willebrand factor; it is also termed the α-I domain). Integrins carrying this domain either bind to collagens (e.g. integrins α1 β1 and α2 β1), or act as cell-cell adhesion molecules (integrins of the β2 family). This α-I domain is the binding site for ligands of such integrins. Those integrins that do not carry this inserted domain also have an A-domain in their ligand binding site, but this A-domain is found on the β subunit. In both cases, the A-domains carry up to three divalent cation binding sites. One is permanently occupied in physiological concentrations of divalent cations, and carries either a calcium or magnesium ion, the principal divalent cations in blood at median concentrations of 1.4 mM (calcium) and 0.8 mM (magnesium). 
The other two sites become occupied by cations when ligands bind—at least for those ligands involving an acidic amino acid in their interaction sites. An acidic amino acid features in the integrin-interaction site of many ECM proteins, for example as part of the amino acid sequence Arginine-Glycine-Aspartic acid ("RGD" in the one-letter amino acid code). Despite many years of effort, discovering the high-resolution structure of integrins proved to be challenging, as membrane proteins are classically difficult to purify, and as integrins are large, complex and highly glycosylated with many sugar 'trees' attached to them. Low-resolution images of detergent extracts of intact integrin GPIIbIIIa, obtained using electron microscopy, and even data from indirect techniques that investigate the solution properties of integrins using ultracentrifugation and light scattering, were combined with fragmentary high-resolution crystallographic or NMR data from single or paired domains of single integrin chains, and molecular models postulated for the rest of the chains. The X-ray crystal structure obtained for the complete extracellular region of one integrin, αvβ3, shows the molecule to be folded into an inverted V-shape that potentially brings the ligand-binding sites close to the cell membrane. Perhaps more importantly, the crystal structure was also obtained for the same integrin bound to a small ligand containing the RGD-sequence, the drug cilengitide. As detailed above, this finally revealed why divalent cations (in the A-domains) are critical for RGD-ligand binding to integrins. The interaction of such sequences with integrins is believed to be a primary switch by which ECM exerts its effects on cell behaviour. The structure poses many questions, especially regarding ligand binding and signal transduction. The ligand binding site is directed towards the C-terminal of the integrin, the region where the molecule emerges from the cell membrane. 
If it emerged orthogonally from the membrane, the ligand-binding site would apparently be obstructed, especially as integrin ligands are typically massive and well cross-linked components of the ECM. In fact, little is known about the angle that membrane proteins subtend to the plane of the membrane; this is a problem that is difficult to address with available technologies. The default assumption is that they emerge rather like little lollipops, but there is little evidence for this. The integrin structure has drawn attention to this problem, which may have general implications for how membrane proteins work. It appears that the integrin transmembrane helices are tilted (see "Activation" below), which hints that the extracellular chains may also not be orthogonal with respect to the membrane surface. Although the crystal structure changed surprisingly little after binding to cilengitide, the current hypothesis is that integrin function involves changes in shape to move the ligand-binding site into a more accessible position, away from the cell surface, and this shape change also triggers intracellular signaling. There is a wide body of cell-biological and biochemical literature that supports this view. Perhaps the most convincing evidence involves the use of antibodies that only recognize integrins when they have bound to their ligands or are activated. As the "footprint" that an antibody makes on its binding target is roughly a circle about 3 nm in diameter, the resolution of this technique is low. Nevertheless, these so-called LIBS (Ligand-Induced-Binding-Sites) antibodies unequivocally show that dramatic changes in integrin shape routinely occur. However, how the changes detected with antibodies look on the structure is still unknown. When released into the cell membrane, newly synthesized integrin dimers are speculated to be found in the same "bent" conformation revealed by the structural studies described above. 
One school of thought claims that this bent form prevents them from interacting with their ligands, although bent forms can predominate in high-resolution EM structures of integrin bound to an ECM ligand. Therefore, at least in biochemical experiments, integrin dimers apparently need not be 'unbent' in order to be primed and to bind the ECM. In cells, the priming is accomplished by the protein talin, which binds to the β tail of the integrin dimer and changes its conformation. The α and β integrin chains are both class-I transmembrane proteins: they pass the plasma membrane as single transmembrane alpha-helices. The helices are longer than the membrane is thick, and recent studies suggest that, for integrin GPIIbIIIa, they are tilted with respect both to one another and to the plane of the membrane. Talin binding alters the angle of tilt of the β3 chain transmembrane helix in model systems, and this may reflect a stage in the process of inside-out signalling which primes integrins. Moreover, talin proteins are able to dimerize and are thus thought to intervene in the clustering of integrin dimers, which leads to the formation of a focal adhesion. Recently, the Kindlin-1 and Kindlin-2 proteins have also been found to interact with integrins and activate them. Integrins have two main functions: attachment of cells to the ECM, and signal transduction from the ECM to the cells. They are also involved in a wide range of other biological activities, including extravasation, cell-to-cell adhesion, and cell migration, and they serve as receptors for certain viruses, such as adenovirus, echovirus, hantavirus, foot-and-mouth disease virus, and poliovirus. Recently, the importance of integrins in the progression of autoimmune disorders has also been attracting scientific attention. These mechanoreceptors seem to regulate autoimmunity by dictating various intracellular pathways that control immune cell adhesion to endothelial cell layers, followed by their trans-migration. 
This process might or might not be dependent on the shear force faced by the extracellular parts of different integrins. A prominent function of the integrins is seen in the molecule GPIIb/IIIa, an integrin on the surface of blood platelets (thrombocytes) responsible for attachment to fibrin within a developing blood clot. This molecule dramatically increases its binding affinity for fibrin/fibrinogen through association of platelets with exposed collagens in the wound site. Upon association of platelets with collagen, GPIIb/IIIa changes shape, allowing it to bind to fibrin and other blood components to form the clot matrix and stop blood loss. Integrins couple the extracellular matrix (ECM) outside a cell to the cytoskeleton (in particular, the microfilaments) inside the cell. Which ligand in the ECM the integrin can bind to is defined by which α and β subunits the integrin is made of. Among the ligands of integrins are fibronectin, vitronectin, collagen, and laminin. The connection between the cell and the ECM may help the cell to endure pulling forces without being ripped out of the ECM. The ability of a cell to create this kind of bond is also of vital importance in ontogeny. Cell attachment to the ECM is a basic requirement to build a multicellular organism. Integrins are not simply hooks, but give the cell critical signals about the nature of its surroundings. Together with signals arising from receptors for soluble growth factors like VEGF, EGF, and many others, they enforce a cellular decision on what biological action to take, be it attachment, movement, death, or differentiation. Thus integrins lie at the heart of many cellular biological processes. The attachment of the cell takes place through formation of cell adhesion complexes, which consist of integrins and many cytoplasmic proteins, such as talin, vinculin, paxillin, and alpha-actinin. 
These act by regulating kinases such as FAK (focal adhesion kinase) and Src kinase family members to phosphorylate substrates such as p130CAS, thereby recruiting signaling adaptors such as CRK. These adhesion complexes attach to the actin cytoskeleton. The integrins thus serve to link two networks across the plasma membrane: the extracellular ECM and the intracellular actin filamentous system. Integrin α6β4 is an exception: it links to the keratin intermediate filament system in epithelial cells. Focal adhesions are large molecular complexes, which are generated following interaction of integrins with the ECM and their subsequent clustering. The clusters likely provide sufficient intracellular binding sites to permit the formation of stable signaling complexes on the cytoplasmic side of the cell membrane. Focal adhesions thus contain the integrin ligand, the integrin molecule, and associated plaque proteins. Binding is driven by changes in free energy. As previously stated, these complexes connect the extracellular matrix to actin bundles. Cryo-electron tomography reveals that the adhesion contains particles on the cell membrane with a diameter of 25 ± 5 nm, spaced at approximately 45 nm. Treatment with the Rho-kinase inhibitor Y-27632 reduces the particle size, and the adhesion is extremely mechanosensitive. One important function of integrins on cells in tissue culture is their role in cell migration. Cells adhere to a substrate through their integrins. During movement, the cell makes new attachments to the substrate at its front and concurrently releases those at its rear. When released from the substrate, integrin molecules are taken back into the cell by endocytosis; they are transported through the cell to its front by the endocytic cycle, where they are added back to the surface. In this way they are cycled for reuse, enabling the cell to make fresh attachments at its leading front. 
The cycle of integrin endocytosis and recycling back to the cell surface is also important for non-migrating cells and during animal development. Integrins play an important role in cell signaling by modulating the cell signaling pathways of transmembrane protein kinases such as receptor tyrosine kinases (RTKs). While the interaction between integrins and receptor tyrosine kinases was originally thought of as unidirectional and supportive, recent studies indicate that integrins have additional, multi-faceted roles in cell signaling. Integrins can regulate receptor tyrosine kinase signaling by recruiting specific adaptors to the plasma membrane. For example, β1c integrin recruits Gab1/Shp2 and presents Shp2 to IGF1R, resulting in dephosphorylation of the receptor. In the reverse direction, when a receptor tyrosine kinase is activated, integrins co-localise at focal adhesions with the receptor tyrosine kinases and their associated signaling molecules. The repertoire of integrins expressed on a particular cell can specify the signaling pathway due to the differential binding affinity of ECM ligands for the integrins. Tissue stiffness and matrix composition can initiate specific signaling pathways regulating cell behavior. Clustering and activation of the integrin/actin complexes strengthen the focal adhesion interaction and initiate the framework for cell signaling through assembly of adhesomes. Depending on the integrin's regulatory impact on specific receptor tyrosine kinases, the cell can experience a range of outcomes. Knowledge of the relationship between integrins and receptor tyrosine kinases has laid a foundation for new approaches to cancer therapy. Specifically, targeting integrins associated with RTKs is an emerging approach for inhibiting angiogenesis. Integrins have an important function in neuroregeneration after injury of the peripheral nervous system (PNS). 
Integrins are present at the growth cone of damaged PNS neurons and attach to ligands in the ECM to promote axon regeneration. It is unclear whether integrins can promote axon regeneration in the adult central nervous system (CNS). There are two obstacles that prevent integrin-mediated regeneration in the CNS: 1) integrins are not localised in the axons of most adult CNS neurons, and 2) integrins become inactivated by molecules in the scar tissue after injury. Around 24 integrins are found in vertebrates. Beta-1 integrins interact with many alpha integrin chains. Gene knockouts of integrins in mice are not always lethal, which suggests that during embryonic development, one integrin may compensate for the loss of another, allowing survival. Some integrins are on the cell surface in an inactive state, and can be rapidly primed, or put into a state capable of binding their ligands, by cytokines. Integrins can assume several different well-defined shapes or "conformational states". Once primed, the conformational state changes to stimulate ligand binding, which then activates the receptors, also by inducing a shape change, to trigger outside-in signal transduction. Media related to Integrins at Wikimedia Commons
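The activation sequence running through this article (a bent, inactive dimer; talin-mediated priming at the β tail via inside-out signalling; ligand binding; and a shape change that triggers outside-in signalling) can be sketched as a minimal state machine. This is an illustrative simplification rather than an established formalism, and it follows the biochemical-priming view; as noted above, bent forms can still predominate in some ligand-bound EM structures.

```python
from enum import Enum, auto

class IntegrinState(Enum):
    BENT_INACTIVE = auto()   # newly delivered dimer in the bent conformation
    PRIMED = auto()          # talin bound to the beta tail (inside-out signalling)
    LIGAND_BOUND = auto()    # RGD-type ECM ligand engaged; outside-in signalling follows

# Allowed transitions: (current state, event) -> next state.
TRANSITIONS = {
    (IntegrinState.BENT_INACTIVE, "talin_binds_beta_tail"): IntegrinState.PRIMED,
    (IntegrinState.PRIMED, "ecm_ligand_binds"): IntegrinState.LIGAND_BOUND,
}

def advance(state: IntegrinState, event: str) -> IntegrinState:
    """Apply one event; events not allowed in the current state leave it unchanged."""
    return TRANSITIONS.get((state, event), state)

# Walk a dimer through priming and ligand binding.
s = IntegrinState.BENT_INACTIVE
s = advance(s, "ecm_ligand_binds")       # not primed yet: in this model, no change
s = advance(s, "talin_binds_beta_tail")  # inside-out priming by talin
s = advance(s, "ecm_ligand_binds")       # ligand binding enables outside-in signalling
```

In this sketch, clustering of primed dimers (via talin dimerization and kindlins) would be a property of a population of such machines rather than a single one, which is why it is left out.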
[ { "paragraph_id": 0, "text": "Integrins are transmembrane receptors that help cell-cell and cell-extracellular matrix (ECM) adhesion. Upon ligand binding, integrins activate signal transduction pathways that mediate cellular signals such as regulation of the cell cycle, organization of the intracellular cytoskeleton, and movement of new receptors to the cell membrane. The presence of integrins allows rapid and flexible responses to events at the cell surface (e.g. signal platelets to initiate an interaction with coagulation factors).", "title": "" }, { "paragraph_id": 1, "text": "Several types of integrins exist, and one cell generally has multiple different types on its surface. Integrins are found in all animals while integrin-like receptors are found in plant cells.", "title": "" }, { "paragraph_id": 2, "text": "Integrins work alongside other proteins such as cadherins, the immunoglobulin superfamily cell adhesion molecules, selectins and syndecans, to mediate cell–cell and cell–matrix interaction. Ligands for integrins include fibronectin, vitronectin, collagen and laminin.", "title": "" }, { "paragraph_id": 3, "text": "Integrins are obligate heterodimers composed of α and β subunits. Several genes code for multiple isoforms of these subunits, which gives rise to an array of unique integrins with varied activity. In mammals, integrins are assembled from eighteen α and eight β subunits, in Drosophila five α and two β subunits, and in Caenorhabditis nematodes two α subunits and one β subunit. The α and β subunits are both class I transmembrane proteins, so each penetrates the plasma membrane once, and can possess several cytoplasmic domains.", "title": "Structure" }, { "paragraph_id": 4, "text": "Variants of some subunits are formed by differential RNA splicing; for example, four variants of the beta-1 subunit exist. 
Through different combinations of the α and β subunits, 24 unique mammalian integrins are generated, excluding splice- and glycosylation variants.", "title": "Structure" }, { "paragraph_id": 5, "text": "Integrin subunits span the cell membrane and have short cytoplasmic domains of 40–70 amino acids. The exception is the beta-4 subunit, which has a cytoplasmic domain of 1,088 amino acids, one of the largest of any membrane protein. Outside the cell membrane, the α and β chains lie close together along a length of about 23 nm; the final 5 nm N-termini of each chain forms a ligand-binding region for the ECM. They have been compared to lobster claws, although they don't actually \"pinch\" their ligand, they chemically interact with it at the insides of the \"tips\" of their \"pinchers\".", "title": "Structure" }, { "paragraph_id": 6, "text": "The molecular mass of the integrin subunits can vary from 90 kDa to 160 kDa. Beta subunits have four cysteine-rich repeated sequences. Both α and β subunits bind several divalent cations. The role of divalent cations in the α subunit is unknown, but may stabilize the folds of the protein. The cations in the β subunits are more interesting: they are directly involved in coordinating at least some of the ligands that integrins bind.", "title": "Structure" }, { "paragraph_id": 7, "text": "Integrins can be categorized in multiple ways. For example, some α chains have an additional structural element (or \"domain\") inserted toward the N-terminal, the alpha-A domain (so called because it has a similar structure to the A-domains found in the protein von Willebrand factor; it is also termed the α-I domain). Integrins carrying this domain either bind to collagens (e.g. integrins α1 β1, and α2 β1), or act as cell-cell adhesion molecules (integrins of the β2 family). This α-I domain is the binding site for ligands of such integrins. 
Those integrins that don't carry this inserted domain also have an A-domain in their ligand binding site, but this A-domain is found on the β subunit.", "title": "Structure" }, { "paragraph_id": 8, "text": "In both cases, the A-domains carry up to three divalent cation binding sites. One is permanently occupied in physiological concentrations of divalent cations, and carries either a calcium or magnesium ion, the principal divalent cations in blood at median concentrations of 1.4 mM (calcium) and 0.8 mM (magnesium). The other two sites become occupied by cations when ligands bind—at least for those ligands involving an acidic amino acid in their interaction sites. An acidic amino acid features in the integrin-interaction site of many ECM proteins, for example as part of the amino acid sequence Arginine-Glycine-Aspartic acid (\"RGD\" in the one-letter amino acid code).", "title": "Structure" }, { "paragraph_id": 9, "text": "Despite many years of effort, discovering the high-resolution structure of integrins proved to be challenging, as membrane proteins are classically difficult to purify, and as integrins are large, complex and highly glycosylated with many sugar 'trees' attached to them. Low-resolution images of detergent extracts of intact integrin GPIIbIIIa, obtained using electron microscopy, and even data from indirect techniques that investigate the solution properties of integrins using ultracentrifugation and light scattering, were combined with fragmentary high-resolution crystallographic or NMR data from single or paired domains of single integrin chains, and molecular models postulated for the rest of the chains.", "title": "Structure" }, { "paragraph_id": 10, "text": "The X-ray crystal structure obtained for the complete extracellular region of one integrin, αvβ3, shows the molecule to be folded into an inverted V-shape that potentially brings the ligand-binding sites close to the cell membrane. 
Perhaps more importantly, the crystal structure was also obtained for the same integrin bound to a small ligand containing the RGD-sequence, the drug cilengitide. As detailed above, this finally revealed why divalent cations (in the A-domains) are critical for RGD-ligand binding to integrins. The interaction of such sequences with integrins is believed to be a primary switch by which ECM exerts its effects on cell behaviour.", "title": "Structure" }, { "paragraph_id": 11, "text": "The structure poses many questions, especially regarding ligand binding and signal transduction. The ligand binding site is directed towards the C-terminal of the integrin, the region where the molecule emerges from the cell membrane. If it emerges orthogonally from the membrane, the ligand binding site would apparently be obstructed, especially as integrin ligands are typically massive and well cross-linked components of the ECM. In fact, little is known about the angle that membrane proteins subtend to the plane of the membrane; this is a problem difficult to address with available technologies. The default assumption is that they emerge rather like little lollipops, but there is little evidence for this. The integrin structure has drawn attention to this problem, which may have general implications for how membrane proteins work. It appears that the integrin transmembrane helices are tilted (see \"Activation\" below), which hints that the extracellular chains may also not be orthogonal with respect to the membrane surface.", "title": "Structure" }, { "paragraph_id": 12, "text": "Although the crystal structure changed surprisingly little after binding to cilengitide, the current hypothesis is that integrin function involves changes in shape to move the ligand-binding site into a more accessible position, away from the cell surface, and this shape change also triggers intracellular signaling. There is a wide body of cell-biological and biochemical literature that supports this view. 
Perhaps the most convincing evidence involves the use of antibodies that only recognize integrins when they have bound to their ligands, or are activated. As the \"footprint\" that an antibody makes on its binding target is roughly a circle about 3 nm in diameter, the resolution of this technique is low. Nevertheless, these so-called LIBS (Ligand-Induced-Binding-Sites) antibodies unequivocally show that dramatic changes in integrin shape routinely occur. However, how the changes detected with antibodies look on the structure is still unknown.", "title": "Structure" }, { "paragraph_id": 13, "text": "When released into the cell membrane, newly synthesized integrin dimers are speculated to be found in the same \"bent\" conformation revealed by the structural studies described above. One school of thought claims that this bent form prevents them from interacting with their ligands, although bent forms can predominate in high-resolution EM structures of integrin bound to an ECM ligand. Therefore, at least in biochemical experiments, integrin dimers apparently need not be 'unbent' in order to be primed and to bind the ECM. In cells, the priming is accomplished by the protein talin, which binds to the β tail of the integrin dimer and changes its conformation. The α and β integrin chains are both class-I transmembrane proteins: they pass the plasma membrane as single transmembrane alpha-helices. The helices are longer than the membrane is thick, and recent studies suggest that, for integrin GPIIbIIIa, they are tilted with respect both to one another and to the plane of the membrane. Talin binding alters the angle of tilt of the β3 chain transmembrane helix in model systems, and this may reflect a stage in the process of inside-out signalling which primes integrins. Moreover, talin proteins are able to dimerize and are thus thought to intervene in the clustering of integrin dimers, which leads to the formation of a focal adhesion. 
Recently, the Kindlin-1 and Kindlin-2 proteins have also been found to interact with integrins and activate them.", "title": "Structure" }, { "paragraph_id": 14, "text": "Integrins have two main functions: attachment of cells to the ECM, and signal transduction from the ECM to the cells. They are also involved in a wide range of other biological activities, including extravasation, cell-to-cell adhesion, and cell migration, and they serve as receptors for certain viruses, such as adenovirus, echovirus, hantavirus, foot-and-mouth disease virus, and poliovirus. Recently, the importance of integrins in the progression of autoimmune disorders has also been attracting scientific attention. These mechanoreceptors seem to regulate autoimmunity by dictating various intracellular pathways that control immune cell adhesion to endothelial cell layers, followed by their trans-migration. This process might or might not be dependent on the shear force faced by the extracellular parts of different integrins.", "title": "Function" }, { "paragraph_id": 15, "text": "A prominent function of the integrins is seen in the molecule GPIIb/IIIa, an integrin on the surface of blood platelets (thrombocytes) responsible for attachment to fibrin within a developing blood clot. This molecule dramatically increases its binding affinity for fibrin/fibrinogen through association of platelets with exposed collagens in the wound site. Upon association of platelets with collagen, GPIIb/IIIa changes shape, allowing it to bind to fibrin and other blood components to form the clot matrix and stop blood loss.", "title": "Function" }, { "paragraph_id": 16, "text": "Integrins couple the extracellular matrix (ECM) outside a cell to the cytoskeleton (in particular, the microfilaments) inside the cell. Which ligand in the ECM the integrin can bind to is defined by which α and β subunits the integrin is made of. Among the ligands of integrins are fibronectin, vitronectin, collagen, and laminin. 
The connection between the cell and the ECM may help the cell to endure pulling forces without being ripped out of the ECM. The ability of a cell to create this kind of bond is also of vital importance in ontogeny.", "title": "Function" }, { "paragraph_id": 17, "text": "Cell attachment to the ECM is a basic requirement to build a multicellular organism. Integrins are not simply hooks, but give the cell critical signals about the nature of its surroundings. Together with signals arising from receptors for soluble growth factors like VEGF, EGF, and many others, they enforce a cellular decision on what biological action to take, be it attachment, movement, death, or differentiation. Thus integrins lie at the heart of many cellular biological processes. The attachment of the cell takes place through formation of cell adhesion complexes, which consist of integrins and many cytoplasmic proteins, such as talin, vinculin, paxillin, and alpha-actinin. These act by regulating kinases such as FAK (focal adhesion kinase) and Src kinase family members to phosphorylate substrates such as p130CAS, thereby recruiting signaling adaptors such as CRK. These adhesion complexes attach to the actin cytoskeleton. The integrins thus serve to link two networks across the plasma membrane: the extracellular ECM and the intracellular actin filamentous system. Integrin α6β4 is an exception: it links to the keratin intermediate filament system in epithelial cells.", "title": "Function" }, { "paragraph_id": 18, "text": "Focal adhesions are large molecular complexes, which are generated following interaction of integrins with the ECM and their subsequent clustering. The clusters likely provide sufficient intracellular binding sites to permit the formation of stable signaling complexes on the cytoplasmic side of the cell membrane. Focal adhesions thus contain the integrin ligand, the integrin molecule, and associated plaque proteins. Binding is driven by changes in free energy. 
As previously stated, these complexes connect the extracellular matrix to actin bundles. Cryo-electron tomography reveals that the adhesion contains particles on the cell membrane with a diameter of 25 ± 5 nm, spaced at approximately 45 nm. Treatment with the Rho-kinase inhibitor Y-27632 reduces the particle size, and the adhesion is extremely mechanosensitive.", "title": "Function" }, { "paragraph_id": 19, "text": "One important function of integrins on cells in tissue culture is their role in cell migration. Cells adhere to a substrate through their integrins. During movement, the cell makes new attachments to the substrate at its front and concurrently releases those at its rear. When released from the substrate, integrin molecules are taken back into the cell by endocytosis; they are transported through the cell to its front by the endocytic cycle, where they are added back to the surface. In this way they are cycled for reuse, enabling the cell to make fresh attachments at its leading front. The cycle of integrin endocytosis and recycling back to the cell surface is also important for non-migrating cells and during animal development.", "title": "Function" }, { "paragraph_id": 20, "text": "Integrins play an important role in cell signaling by modulating the cell signaling pathways of transmembrane protein kinases such as receptor tyrosine kinases (RTKs). While the interaction between integrins and receptor tyrosine kinases was originally thought of as unidirectional and supportive, recent studies indicate that integrins have additional, multi-faceted roles in cell signaling. Integrins can regulate receptor tyrosine kinase signaling by recruiting specific adaptors to the plasma membrane. For example, β1c integrin recruits Gab1/Shp2 and presents Shp2 to IGF1R, resulting in dephosphorylation of the receptor. 
In the reverse direction, when a receptor tyrosine kinase is activated, integrins co-localise at focal adhesions with the receptor tyrosine kinases and their associated signaling molecules.", "title": "Function" }, { "paragraph_id": 21, "text": "The repertoire of integrins expressed on a particular cell can specify the signaling pathway due to the differential binding affinity of ECM ligands for the integrins. Tissue stiffness and matrix composition can initiate specific signaling pathways regulating cell behavior. Clustering and activation of the integrin/actin complexes strengthen the focal adhesion interaction and initiate the framework for cell signaling through assembly of adhesomes.", "title": "Function" }, { "paragraph_id": 22, "text": "Depending on the integrin's regulatory impact on specific receptor tyrosine kinases, the cell can experience a range of outcomes.", "title": "Function" }, { "paragraph_id": 23, "text": "Knowledge of the relationship between integrins and receptor tyrosine kinases has laid a foundation for new approaches to cancer therapy. Specifically, targeting integrins associated with RTKs is an emerging approach for inhibiting angiogenesis.", "title": "Function" }, { "paragraph_id": 24, "text": "Integrins have an important function in neuroregeneration after injury of the peripheral nervous system (PNS). Integrins are present at the growth cone of damaged PNS neurons and attach to ligands in the ECM to promote axon regeneration. It is unclear whether integrins can promote axon regeneration in the adult central nervous system (CNS). 
There are two obstacles that prevent integrin-mediated regeneration in the CNS: 1) integrins are not localised in the axons of most adult CNS neurons, and 2) integrins become inactivated by molecules in the scar tissue after injury.", "title": "Integrins and nerve repair" }, { "paragraph_id": 25, "text": "The following are 16 of the ~24 integrins found in vertebrates:", "title": "Vertebrate integrins" }, { "paragraph_id": 26, "text": "Beta-1 integrins interact with many alpha integrin chains. Gene knockouts of integrins in mice are not always lethal, which suggests that during embryonic development, one integrin may compensate for the loss of another, allowing survival. Some integrins are on the cell surface in an inactive state, and can be rapidly primed, or put into a state capable of binding their ligands, by cytokines. Integrins can assume several different well-defined shapes or \"conformational states\". Once primed, the conformational state changes to stimulate ligand binding, which then activates the receptors, also by inducing a shape change, to trigger outside-in signal transduction.", "title": "Vertebrate integrins" }, { "paragraph_id": 27, "text": "Media related to Integrins at Wikimedia Commons", "title": "External links" } ]
Integrins are transmembrane receptors that help cell-cell and cell-extracellular matrix (ECM) adhesion. Upon ligand binding, integrins activate signal transduction pathways that mediate cellular signals such as regulation of the cell cycle, organization of the intracellular cytoskeleton, and movement of new receptors to the cell membrane. The presence of integrins allows rapid and flexible responses to events at the cell surface. Several types of integrins exist, and one cell generally has multiple different types on its surface. Integrins are found in all animals while integrin-like receptors are found in plant cells. Integrins work alongside other proteins such as cadherins, the immunoglobulin superfamily cell adhesion molecules, selectins and syndecans, to mediate cell–cell and cell–matrix interaction. Ligands for integrins include fibronectin, vitronectin, collagen and laminin.
2001-12-01T10:39:42Z
2023-11-13T00:55:54Z
[ "Template:Clear left", "Template:Cite journal", "Template:Cell adhesion molecules", "Template:Short description", "Template:Nobold", "Template:Growth factor receptor modulators", "Template:Signal transduction", "Template:Cite web", "Template:MeshName", "Template:Gene", "Template:Reflist", "Template:Commons-inline", "Template:Integrins", "Template:Stack", "Template:Cite book" ]
https://en.wikipedia.org/wiki/Integrin
15,303
Ion channel
Ion channels are pore-forming membrane proteins that allow ions to pass through the channel pore. Their functions include establishing a resting membrane potential, shaping action potentials and other electrical signals by gating the flow of ions across the cell membrane, controlling the flow of ions across secretory and epithelial cells, and regulating cell volume. Ion channels are present in the membranes of all cells. Ion channels are one of the two classes of ionophoric proteins, the other being ion transporters. The study of ion channels often involves biophysics, electrophysiology, and pharmacology, while using techniques including voltage clamp, patch clamp, immunohistochemistry, X-ray crystallography, fluoroscopy, and RT-PCR. Their classification as molecules is referred to as channelomics. There are two distinctive features of ion channels that differentiate them from other types of ion transporter proteins: ions flow through a channel passively, down their electrochemical gradient and without the input of metabolic energy, and they do so at very high rates (on the order of a million ions per second or more). Ion channels are located within the membrane of all excitable cells, and of many intracellular organelles. They are often described as narrow, water-filled tunnels that allow only ions of a certain size and/or charge to pass through. This characteristic is called selective permeability. The archetypal channel pore is just one or two atoms wide at its narrowest point and is selective for specific species of ion, such as sodium or potassium. However, some channels may be permeable to more than one type of ion, typically sharing a common charge: positive (cations) or negative (anions). Ions often move through the segments of the channel pore in single file, nearly as quickly as they move through free solution. In many ion channels, passage through the pore is governed by a "gate", which may be opened or closed in response to chemical or electrical signals, temperature, or mechanical force. Ion channels are integral membrane proteins, typically formed as assemblies of several individual proteins. 
Such "multi-subunit" assemblies usually involve a circular arrangement of identical or homologous proteins closely packed around a water-filled pore through the plane of the membrane or lipid bilayer. For most voltage-gated ion channels, the pore-forming subunit(s) are called the α subunit, while the auxiliary subunits are denoted β, γ, and so on. Because channels underlie the nerve impulse and because "transmitter-activated" channels mediate conduction across the synapses, channels are especially prominent components of the nervous system. Indeed, numerous toxins that organisms have evolved for shutting down the nervous systems of predators and prey (e.g., the venoms produced by spiders, scorpions, snakes, fish, bees, sea snails, and others) work by modulating ion channel conductance and/or kinetics. In addition, ion channels are key components in a wide variety of biological processes that involve rapid changes in cells, such as cardiac, skeletal, and smooth muscle contraction, epithelial transport of nutrients and ions, T-cell activation, and pancreatic beta-cell insulin release. In the search for new drugs, ion channels are a frequent target. There are over 300 types of ion channels just in the cells of the inner ear. Ion channels may be classified by the nature of their gating, the species of ions passing through those gates, the number of gates (pores), and localization of proteins. Further heterogeneity of ion channels arises when channels with different constitutive subunits give rise to a specific kind of current. Absence or mutation of one or more of the contributing types of channel subunits can result in loss of function and, potentially, underlie neurologic diseases. Ion channels may be classified by gating, i.e. what opens and closes the channels. 
For example, voltage-gated ion channels open or close depending on the voltage gradient across the plasma membrane, while ligand-gated ion channels open or close depending on the binding of ligands to the channel. Voltage-gated ion channels open and close in response to membrane potential. Also known as ionotropic receptors, this group of channels opens in response to specific ligand molecules binding to the extracellular domain of the receptor protein. Ligand binding causes a conformational change in the structure of the channel protein that ultimately leads to the opening of the channel gate and subsequent ion flux across the plasma membrane. Examples of such channels include the cation-permeable nicotinic acetylcholine receptors, ionotropic glutamate-gated receptors, acid-sensing ion channels (ASICs), ATP-gated P2X receptors, and the anion-permeable γ-aminobutyric acid-gated GABAA receptor. Ion channels activated by second messengers may also be categorized in this group, although ligands and second messengers are otherwise distinguished from each other. This group of channels opens in response to specific lipid molecules binding to the channel's transmembrane domain, typically near the inner leaflet of the plasma membrane. Phosphatidylinositol 4,5-bisphosphate (PIP2) and phosphatidic acid (PA) are the best-characterized lipids that gate these channels. Many of the leak potassium channels are gated by lipids, including the inward-rectifier potassium channels and the two-pore-domain potassium channels TREK-1 and TRAAK. The KCNQ potassium channel family is gated by PIP2. The voltage-activated potassium channel Kv is regulated by PA: upon PA hydrolysis, its midpoint of activation shifts by +50 mV, placing it near resting membrane potentials. This suggests that Kv could be opened by lipid hydrolysis independently of voltage, which may qualify it as a dual lipid- and voltage-gated channel. 
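The voltage dependence of channel opening described above is commonly summarized by a Boltzmann function. The sketch below (with illustrative midpoint and slope values, not measurements from any particular channel) shows how a +50 mV shift in the activation midpoint, like the one reported for Kv, changes open probability at a given membrane potential:

```python
import math

def open_probability(v_mv, v_half_mv, slope_mv=10.0):
    """Boltzmann open-probability curve: P(V) = 1 / (1 + exp((V1/2 - V) / k))."""
    return 1.0 / (1.0 + math.exp((v_half_mv - v_mv) / slope_mv))

# Hypothetical midpoints: a control curve and one shifted by +50 mV.
v_half_control = -20.0
v_half_shifted = v_half_control + 50.0

for v in (-70.0, -20.0, 30.0):
    p_ctrl = open_probability(v, v_half_control)
    p_shift = open_probability(v, v_half_shifted)
    print(f"V = {v:+.0f} mV: P_open = {p_ctrl:.3f} (control), {p_shift:.3f} (shifted)")
```

Shifting the midpoint moves the voltage range over which the channel transitions from mostly closed to mostly open, which is why a lipid-induced shift can effectively gate the channel at a fixed membrane potential.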
Gating also includes activation and inactivation by second messengers from the inside of the cell membrane, rather than from outside the cell, as is the case for ligands. Ion channels are also classified according to their subcellular localization. The plasma membrane accounts for around 2% of the total membrane in the cell, whereas intracellular organelles contain 98% of the cell's membrane. The major intracellular compartments are the endoplasmic reticulum, the Golgi apparatus, and mitochondria. On the basis of localization, ion channels are classified as: Some ion channels are classified by the duration of their response to stimuli: Channels differ with respect to the ion they let pass (for example, Na+, K+, Cl−), the ways in which they may be regulated, the number of subunits of which they are composed, and other aspects of structure. Channels belonging to the largest class, which includes the voltage-gated channels that underlie the nerve impulse, consist of four or sometimes five subunits with six transmembrane helices each. On activation, these helices move about and open the pore. Two of these six helices are separated by a loop that lines the pore and is the primary determinant of ion selectivity and conductance in this channel class and some others. The existence and mechanism of ion selectivity were first postulated in the late 1960s by Bertil Hille and Clay Armstrong. The idea of ionic selectivity for potassium channels was that the carbonyl oxygens of the protein backbones of the "selectivity filter" (named by Bertil Hille) could efficiently replace the water molecules that normally shield potassium ions, whereas sodium ions are smaller and cannot be completely dehydrated to allow such shielding, and therefore cannot pass through. This mechanism was finally confirmed when the first structure of an ion channel was elucidated. 
The bacterial potassium channel KcsA, consisting of just the selectivity filter, "P" loop, and two transmembrane helices, was used as a model to study the permeability and selectivity of ion channels in the MacKinnon lab. The determination of the molecular structure of KcsA by Roderick MacKinnon using X-ray crystallography won a share of the 2003 Nobel Prize in Chemistry. Because of their small size and the difficulty of crystallizing integral membrane proteins for X-ray analysis, it is only very recently that scientists have been able to directly examine what channels "look like." Particularly in cases where the crystallography required removing channels from their membranes with detergent, many researchers regard the images that have been obtained as tentative. An example is the long-awaited crystal structure of a voltage-gated potassium channel, which was reported in May 2003. One inevitable ambiguity about these structures relates to the strong evidence that channels change conformation as they operate (they open and close, for example), such that the structure in the crystal could represent any one of these operational states. Most of what researchers have deduced about channel operation so far they have established through electrophysiology, biochemistry, gene sequence comparison, and mutagenesis. Channels can have a single transmembrane domain (CLICs) or multiple transmembrane domains (K+ channels, P2X receptors, Na+ channels) that span the plasma membrane to form pores. The pore can determine the selectivity of the channel, and the gate can be formed either inside or outside the pore region. Chemical substances can modulate the activity of ion channels, for example by blocking or activating them. A variety of ion channel blockers (inorganic and organic molecules) can modulate ion channel activity and conductance. Some commonly used blockers include: Several compounds are known to promote the opening or activation of specific ion channels. 
These are classified by the channel on which they act: There are a number of disorders which disrupt the normal functioning of ion channels and have disastrous consequences for the organism. Genetic and autoimmune disorders of ion channels and their modifiers are known as channelopathies. See Category:Channelopathies for a full list. The fundamental properties of currents mediated by ion channels were analyzed by the British biophysicists Alan Hodgkin and Andrew Huxley as part of their Nobel Prize-winning research on the action potential, published in 1952. They built on the work of other physiologists, such as Cole and Baker's 1941 research into voltage-gated membrane pores. The existence of ion channels was confirmed in the 1970s by Bernard Katz and Ricardo Miledi using noise analysis. It was then shown more directly with an electrical recording technique known as the "patch clamp", which earned its inventors, Erwin Neher and Bert Sakmann, a Nobel Prize. Hundreds if not thousands of researchers continue to pursue a more detailed understanding of how these proteins work. In recent years, the development of automated patch clamp devices has significantly increased the throughput of ion channel screening. The Nobel Prize in Chemistry for 2003 was awarded to Roderick MacKinnon for his studies on the physico-chemical properties of ion channel structure and function, including X-ray crystallographic structure studies. Roderick MacKinnon commissioned Birth of an Idea, a 5-foot (1.5 m) tall sculpture based on the KcsA potassium channel. The artwork contains a wire object representing the channel's interior with a blown glass object representing the main cavity of the channel structure.
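The Hodgkin–Huxley analysis mentioned above can be made concrete with a short simulation. The sketch below integrates the classic squid-axon equations (textbook parameter values) with a simple Euler step; it is a minimal illustration, not a research-grade simulator:

```python
import math

# Standard Hodgkin-Huxley squid-axon parameters (textbook values).
C_M = 1.0                              # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3      # maximal conductances, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.4    # reversal potentials, mV

# Voltage-dependent rate constants for the m, h, and n gating variables.
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_inj=10.0, t_ms=50.0, dt=0.01):
    """Euler-integrate the HH equations under a constant injected current."""
    v, m, h, n = -65.0, 0.05, 0.6, 0.32   # approximate resting state
    trace = []
    for _ in range(round(t_ms / dt)):
        i_na = G_NA * m**3 * h * (v - E_NA)  # fast sodium current
        i_k = G_K * n**4 * (v - E_K)         # delayed-rectifier potassium
        i_l = G_L * (v - E_L)                # leak current
        v += dt * (i_inj - i_na - i_k - i_l) / C_M
        m += dt * (alpha_m(v) * (1.0 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1.0 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1.0 - n) - beta_n(v) * n)
        trace.append(v)
    return trace

trace = simulate()
print(f"peak membrane potential: {max(trace):.1f} mV")
```

With a suprathreshold injected current (here 10 µA/cm²), the membrane potential repeatedly overshoots 0 mV, reproducing the action potentials whose underlying sodium and potassium currents Hodgkin and Huxley dissected.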
IDE
IDE, iDE, or Ide may refer to:
Integrated development environment
An integrated development environment (IDE) is a software application that provides comprehensive facilities for software development. An IDE normally consists of at least a source-code editor, build automation tools, and a debugger. Some IDEs, such as IntelliJ IDEA, Eclipse, and Lazarus, contain the necessary compiler, interpreter, or both; others, such as SharpDevelop and NetBeans, do not. The boundary between an IDE and other parts of the broader software development environment is not well-defined; sometimes a version control system or various tools to simplify the construction of a graphical user interface (GUI) are integrated. Many modern IDEs also have a class browser, an object browser, and a class hierarchy diagram for use in object-oriented software development. Integrated development environments are designed to maximize programmer productivity by providing tight-knit components with similar user interfaces. IDEs present a single program in which all development is done. This program typically provides many features for authoring, modifying, compiling, deploying, and debugging software. This contrasts with software development using unrelated tools, such as vi, GDB, GNU Compiler Collection, or make. One aim of the IDE is to reduce the configuration necessary to piece together multiple development utilities. Instead, it provides the same set of capabilities as one cohesive unit. Reducing setup time can increase developer productivity, especially in cases where learning to use the IDE is faster than manually integrating and learning all of the individual tools. Tighter integration of all development tasks has the potential to improve overall productivity beyond just helping with setup tasks. For example, code can be continuously parsed while it is being edited, providing instant feedback when syntax errors are introduced, thus allowing developers to debug code much faster and more easily with an IDE. 
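The continuous parsing described above can be illustrated in a few lines: an editor can re-parse the buffer after each edit and surface the first syntax error with its location. A minimal sketch using Python's standard ast module (the editor integration itself is omitted):

```python
import ast

def check_syntax(source: str):
    """Parse a buffer and return (line, column, message) for the first
    syntax error, or None if the source parses cleanly."""
    try:
        ast.parse(source)
        return None
    except SyntaxError as err:
        return (err.lineno, err.offset, err.msg)

# A well-formed buffer produces no diagnostics...
assert check_syntax("x = 1 + 2\n") is None

# ...while a broken one is flagged with its location, the kind of
# instant feedback an IDE overlays on the editor as you type.
error = check_syntax("def f(:\n    pass\n")
print(error)  # (line, column, message) of the first syntax error
```

An IDE would run a check like this on a background thread and underline the reported position, rather than waiting for a full compile.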
Some IDEs are dedicated to a specific programming language, allowing a feature set that most closely matches the programming paradigms of the language. However, there are many multiple-language IDEs. While most modern IDEs are graphical, text-based IDEs such as Turbo Pascal were in popular use before the availability of windowing systems like Microsoft Windows and the X Window System (X11). They commonly use function keys or hotkeys to execute frequently used commands or macros. IDEs initially became possible when developing via a console or terminal. Earlier systems could not support one, since programs were prepared using flowcharts and entered on punched cards (or paper tape, etc.) before being submitted to a compiler. Dartmouth BASIC was the first language to be created with an IDE (and was also the first to be designed for use while sitting in front of a console or terminal). Its IDE (part of the Dartmouth Time Sharing System) was command-based, and therefore did not look much like the menu-driven, graphical IDEs popular after the advent of the graphical user interface. However, it integrated editing, file management, compilation, debugging, and execution in a manner consistent with a modern IDE. Maestro I, a product from Softlab Munich, was the world's first integrated development environment for software. Maestro I was installed for 22,000 programmers worldwide. Until 1989, 6,000 installations existed in the Federal Republic of Germany. Maestro was arguably the world leader in this field during the 1970s and 1980s. Today, one of the last Maestro I systems can be found in the Museum of Information Technology at Arlington, Texas. One of the first IDEs with a plug-in concept was Softbench. In 1995, Computerwoche commented that the use of an IDE was not well received by developers, since it would fence in their creativity. As of August 2023, the most commonly searched-for IDEs on Google Search were Visual Studio, Visual Studio Code, and Eclipse. 
The IDE editor usually provides syntax highlighting: it can show the structure, language keywords, and syntax errors in visually distinct colors and font effects. Code completion is an important IDE feature, intended to speed up programming. Modern IDEs even have intelligent code completion, a context-aware code completion feature that speeds up the process of coding applications by reducing typos and other common mistakes. Attempts at this are usually done through auto-completion popups while typing, querying parameters of functions, and query hints related to syntax errors. Intelligent code completion and related tools serve as documentation and disambiguation for variable names, functions, and methods, using static analysis. Advanced IDEs provide support for automated refactoring. An IDE is expected to provide integrated version control, in order to interact with source repositories. IDEs are also used for debugging, using an integrated debugger, with support for setting breakpoints in the editor, visual rendering of steps, etc. IDEs may provide support for code search. Code search has two different meanings. First, it means searching for class and function declarations, usages, variable and field reads/writes, etc. IDEs can use different kinds of user interface for code search, for example form-based widgets and natural-language-based interfaces. Second, it means searching for a concrete implementation of some specified functionality. Visual programming is a usage scenario in which an IDE is generally required. Visual Basic allows users to create new applications by moving programming building blocks or code nodes to create flowcharts or structure diagrams that are then compiled or interpreted. These flowcharts are often based on the Unified Modeling Language. 
This interface has been popularized with the Lego Mindstorms system and is being actively pursued by a number of companies wishing to capitalize on the power of custom browsers like those found at Mozilla. KTechlab supports flowcode and is a popular open-source IDE and simulator for developing software for microcontrollers. Visual programming is also responsible for the power of distributed programming (cf. LabVIEW and EICASLAB software). An early visual programming system, Max, was modeled after analog synthesizer design and has been used to develop real-time music performance software since the 1980s. Another early example was Prograph, a dataflow-based system originally developed for the Macintosh. The graphical programming environment "Grape" is used to program qfix robot kits. This approach is also used in specialist software such as Openlab, where the end-users want the flexibility of a full programming language without the traditional learning curve associated with one. Some IDEs support multiple languages, such as GNU Emacs, based on C and Emacs Lisp; IntelliJ IDEA, Eclipse, MyEclipse, and NetBeans, based on Java; MonoDevelop, based on C#; or PlayCode. Support for alternative languages is often provided by plugins, allowing them to be installed on the same IDE at the same time. For example, Flycheck is a modern on-the-fly syntax checking extension for GNU Emacs 24 with support for 39 languages. Another example is JDoodle, an online cloud-based IDE that supports over 76 languages. Eclipse and NetBeans have plugins for C/C++, Ada, GNAT (for example AdaGIDE), Perl, Python, Ruby, and PHP, which are selected automatically based on file extension, environment, or project settings. Unix programmers can combine command-line POSIX tools into a complete development environment, capable of developing large programs such as the Linux kernel and its environment. In this sense, the entire Unix system functions as an IDE. 
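The prefix matching at the core of the code-completion feature discussed earlier can be sketched with a sorted identifier index; real IDEs layer scope analysis and type inference on top (the identifier names below are made up for illustration):

```python
import bisect

class CompletionIndex:
    """Toy code-completion index: given a typed prefix, return the known
    identifiers that start with it, via binary search on a sorted list."""

    def __init__(self, identifiers):
        self.names = sorted(set(identifiers))

    def complete(self, prefix):
        # All matches form a contiguous run in the sorted list, bounded
        # below by the prefix itself and above by prefix + a max sentinel.
        lo = bisect.bisect_left(self.names, prefix)
        hi = bisect.bisect_left(self.names, prefix + "\uffff")
        return self.names[lo:hi]

# Identifiers an IDE might have harvested from the open project (illustrative).
index = CompletionIndex(["print", "parse_args", "parse_config", "path_join", "range"])
print(index.complete("pa"))  # ['parse_args', 'parse_config', 'path_join']
```

Binary search keeps each lookup fast even for large projects, which matters because the editor queries the index on every keystroke.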
The free software GNU tools (GNU Compiler Collection (GCC), GNU Debugger (GDB), and GNU make) are available on many platforms, including Windows. The pervasive Unix philosophy of "everything is a text stream" enables developers who favor command-line-oriented tools to use editors with support for many of the standard Unix and GNU build tools, building an IDE with programs like Emacs or Vim. Data Display Debugger is intended to be an advanced graphical front-end for many standard text-based debugger tools. Some programmers prefer managing makefiles and their derivatives over the similar code-building tools included in a full IDE. For example, most contributors to the PostgreSQL database use make and GDB directly to develop new features. Even when building PostgreSQL for Microsoft Windows using Visual C++, Perl scripts are used as a replacement for make rather than relying on any IDE features. Some Linux IDEs such as Geany attempt to provide a graphical front end to traditional build operations. On the various Microsoft Windows platforms, command-line tools for development are seldom used. Accordingly, there are many commercial and non-commercial products. However, each has a different design, commonly creating incompatibilities. Most major compiler vendors for Windows still provide free copies of their command-line tools, including Microsoft (Visual C++, Platform SDK, .NET Framework SDK, nmake utility). IDEs have always been popular on the Apple Macintosh's classic Mac OS and macOS, dating back to the Macintosh Programmer's Workshop, Turbo Pascal, THINK Pascal, and THINK C environments of the mid-1980s. Currently, macOS programmers can choose between native IDEs like Xcode and open-source tools such as Eclipse and NetBeans. ActiveState Komodo is a proprietary multilanguage IDE supported on macOS. A web integrated development environment (Web IDE), also known as an Online IDE or Cloud IDE, is a browser-based IDE that allows for software development or web development. 
A web IDE can be accessed from a web browser allowing for a portable work environment. A web IDE does not usually contain all of the same features as a traditional, or desktop, IDE, although all of the basic IDE features, such as syntax highlighting, are typically present. A Mobile-Based Integrated Development Environment (IDE) is a software application that provides a comprehensive suite of tools for software development on mobile platforms. Unlike traditional desktop IDEs, mobile-based IDEs are designed to run on smartphones and tablets, allowing developers to write, debug, and deploy code directly from their mobile devices.
[ { "paragraph_id": 0, "text": "An integrated development environment (IDE) is a software application that provides comprehensive facilities for software development. An IDE normally consists of at least a source-code editor, build automation tools, and a debugger. Some IDEs, such as IntelliJ IDEA, Eclipse and Lazarus, contain the necessary compiler, interpreter or both; others, such as SharpDevelop and NetBeans, do not.", "title": "" }, { "paragraph_id": 1, "text": "The boundary between an IDE and other parts of the broader software development environment is not well-defined; sometimes a version control system or various tools to simplify the construction of a graphical user interface (GUI) are integrated. Many modern IDEs also have a class browser, an object browser, and a class hierarchy diagram for use in object-oriented software development.", "title": "" }, { "paragraph_id": 2, "text": "Integrated development environments are designed to maximize programmer productivity by providing tight-knit components with similar user interfaces. IDEs present a single program in which all development is done. This program typically provides many features for authoring, modifying, compiling, deploying and debugging software. This contrasts with software development using unrelated tools, such as vi, GDB, GNU Compiler Collection, or make.", "title": "Overview" }, { "paragraph_id": 3, "text": "One aim of the IDE is to reduce the configuration necessary to piece together multiple development utilities. Instead, it provides the same set of capabilities as one cohesive unit. Reducing setup time can increase developer productivity, especially in cases where learning to use the IDE is faster than manually integrating and learning all of the individual tools. Tighter integration of all development tasks has the potential to improve overall productivity beyond just helping with setup tasks.
For example, code can be continuously parsed while it is being edited, providing instant feedback when syntax errors are introduced, thus allowing developers to debug code much faster and more easily with an IDE.", "title": "Overview" }, { "paragraph_id": 4, "text": "Some IDEs are dedicated to a specific programming language, allowing a feature set that most closely matches the programming paradigms of the language. However, there are many multiple-language IDEs.", "title": "Overview" }, { "paragraph_id": 5, "text": "While most modern IDEs are graphical, text-based IDEs such as Turbo Pascal were in popular use before the availability of windowing systems like Microsoft Windows and the X Window System (X11). They commonly use function keys or hotkeys to execute frequently used commands or macros.", "title": "Overview" }, { "paragraph_id": 6, "text": "IDEs initially became possible when developing via a console or terminal. Early systems could not support one, since programs were prepared using flowcharts, entering programs with punched cards (or paper tape, etc.) before submitting them to a compiler. Dartmouth BASIC was the first language to be created with an IDE (and was also the first to be designed for use while sitting in front of a console or terminal). Its IDE (part of the Dartmouth Time Sharing System) was command-based, and therefore did not look much like the menu-driven, graphical IDEs popular after the advent of the Graphical User Interface. However it integrated editing, file management, compilation, debugging and execution in a manner consistent with a modern IDE.", "title": "History" }, { "paragraph_id": 7, "text": "Maestro I is a product from Softlab Munich and was the world's first integrated development environment for software. Maestro I was installed for 22,000 programmers worldwide. Until 1989, 6,000 installations existed in the Federal Republic of Germany. Maestro was arguably the world leader in this field during the 1970s and 1980s. 
Today one of the last Maestro I systems can be found in the Museum of Information Technology at Arlington in Texas.", "title": "History" }, { "paragraph_id": 8, "text": "One of the first IDEs with a plug-in concept was Softbench. In 1995 Computerwoche commented that the use of an IDE was not well received by developers since it would fence in their creativity.", "title": "History" }, { "paragraph_id": 9, "text": "As of August 2023, the most commonly searched for IDEs on Google Search were Visual Studio, Visual Studio Code, and Eclipse.", "title": "History" }, { "paragraph_id": 10, "text": "The IDE editor usually provides syntax highlighting; it can show structures, language keywords and syntax errors with visually distinct colors and font effects.", "title": "Topics" }, { "paragraph_id": 11, "text": "Code completion is an important IDE feature, intended to speed up programming. Modern IDEs even have intelligent code completion.", "title": "Topics" }, { "paragraph_id": 12, "text": "Intelligent code completion is a context-aware code completion feature in some programming environments that speeds up the process of coding applications by reducing typos and other common mistakes. Attempts at this are usually done through auto-completion popups while typing, querying parameters of functions, and query hints related to syntax errors.
Intelligent code completion and related tools serve as documentation and disambiguation for variable names, functions, and methods, using static analysis.", "title": "Topics" }, { "paragraph_id": 13, "text": "Advanced IDEs provide support for automated refactoring.", "title": "Topics" }, { "paragraph_id": 14, "text": "An IDE is expected to provide integrated version control, in order to interact with source repositories.", "title": "Topics" }, { "paragraph_id": 15, "text": "IDEs are also used for debugging, using an integrated debugger, with support for setting breakpoints in the editor, visual rendering of steps, etc.", "title": "Topics" }, { "paragraph_id": 16, "text": "IDEs may provide support for code search. Code search has two different meanings. First, it means searching for class and function declarations, usages, variable and field read/write, etc. IDEs can use different kinds of user interface for code search, for example form-based widgets and natural-language based interfaces. Second, it means searching for a concrete implementation of some specified functionality.", "title": "Topics" }, { "paragraph_id": 17, "text": "Visual programming is a usage scenario in which an IDE is generally required. Visual Basic allows users to create new applications by moving programming, building blocks, or code nodes to create flowcharts or structure diagrams that are then compiled or interpreted. These flowcharts often are based on the Unified Modeling Language.", "title": "Topics" }, { "paragraph_id": 18, "text": "This interface has been popularized with the Lego Mindstorms system and is being actively pursued by a number of companies wishing to capitalize on the power of custom browsers like those found at Mozilla. KTechlab supports flowcode and is a popular open-source IDE and Simulator for developing software for microcontrollers. Visual programming is also responsible for the power of distributed programming (cf. LabVIEW and EICASLAB software).
An early visual programming system, Max, was modeled after analog synthesizer design and has been used to develop real-time music performance software since the 1980s. Another early example was Prograph, a dataflow-based system originally developed for the Macintosh. The graphical programming environment \"Grape\" is used to program qfix robot kits.", "title": "Topics" }, { "paragraph_id": 19, "text": "This approach is also used in specialist software such as Openlab, where the end-users want the flexibility of a full programming language, without the traditional learning curve associated with one.", "title": "Topics" }, { "paragraph_id": 20, "text": "Some IDEs support multiple languages, such as GNU Emacs based on C and Emacs Lisp; IntelliJ IDEA, Eclipse, MyEclipse or NetBeans, based on Java; MonoDevelop, based on C#; or PlayCode.", "title": "Topics" }, { "paragraph_id": 21, "text": "Support for alternative languages is often provided by plugins, allowing them to be installed on the same IDE at the same time. For example, Flycheck is a modern on-the-fly syntax checking extension for GNU Emacs 24 with support for 39 languages. Another example is JDoodle, an online cloud-based IDE that supports over 76 languages. Eclipse and NetBeans have plugins for C/C++, Ada, GNAT (for example AdaGIDE), Perl, Python, Ruby, and PHP, which are selected automatically based on file extension, environment or project settings.", "title": "Topics" }, { "paragraph_id": 22, "text": "Unix programmers can combine command-line POSIX tools into a complete development environment, capable of developing large programs such as the Linux kernel and its environment. In this sense, the entire Unix system functions as an IDE. The free software GNU tools (GNU Compiler Collection (GCC), GNU Debugger (GDB), and GNU make) are available on many platforms, including Windows.
The pervasive Unix philosophy of \"everything is a text stream\" enables developers who favor command-line oriented tools to use editors with support for many of the standard Unix and GNU build tools, building an IDE with programs like Emacs or Vim. Data Display Debugger is intended to be an advanced graphical front-end for many standard text-based debugger tools. Some programmers prefer managing makefiles and their derivatives to the similar code building tools included in a full IDE. For example, most contributors to the PostgreSQL database use make and GDB directly to develop new features. Even when building PostgreSQL for Microsoft Windows using Visual C++, Perl scripts are used as a replacement for make rather than relying on any IDE features. Some Linux IDEs such as Geany attempt to provide a graphical front end to traditional build operations.", "title": "Topics" }, { "paragraph_id": 23, "text": "On the various Microsoft Windows platforms, command-line tools for development are seldom used. Accordingly, there are many commercial and non-commercial products. However, each has a different design, commonly creating incompatibilities. Most major compiler vendors for Windows still provide free copies of their command-line tools, including Microsoft (Visual C++, Platform SDK, .NET Framework SDK, nmake utility).", "title": "Topics" }, { "paragraph_id": 24, "text": "IDEs have always been popular on the Apple Macintosh's classic Mac OS and macOS, dating back to Macintosh Programmer's Workshop, Turbo Pascal, THINK Pascal and THINK C environments of the mid-1980s. Currently macOS programmers can choose between native IDEs like Xcode and open-source tools such as Eclipse and NetBeans.
ActiveState Komodo is a proprietary multilanguage IDE supported on macOS.", "title": "Topics" }, { "paragraph_id": 25, "text": "A web integrated development environment (Web IDE), also known as an Online IDE or Cloud IDE, is a browser based IDE that allows for software development or web development. A web IDE can be accessed from a web browser allowing for a portable work environment. A web IDE does not usually contain all of the same features as a traditional, or desktop, IDE, although all of the basic IDE features, such as syntax highlighting, are typically present.", "title": "Online" }, { "paragraph_id": 26, "text": "A Mobile-Based Integrated Development Environment (IDE) is a software application that provides a comprehensive suite of tools for software development on mobile platforms. Unlike traditional desktop IDEs, mobile-based IDEs are designed to run on smartphones and tablets, allowing developers to write, debug, and deploy code directly from their mobile devices.", "title": "Online" } ]
An integrated development environment (IDE) is a software application that provides comprehensive facilities for software development. An IDE normally consists of at least a source-code editor, build automation tools, and a debugger. Some IDEs, such as IntelliJ IDEA, Eclipse and Lazarus, contain the necessary compiler, interpreter or both; others, such as SharpDevelop and NetBeans, do not. The boundary between an IDE and other parts of the broader software development environment is not well-defined; sometimes a version control system or various tools to simplify the construction of a graphical user interface (GUI) are integrated. Many modern IDEs also have a class browser, an object browser, and a class hierarchy diagram for use in object-oriented software development.
2001-12-02T00:39:16Z
2023-12-23T06:45:03Z
[ "Template:Citation needed", "Template:As of", "Template:Commons category", "Template:Portal", "Template:Short description", "Template:Cite web", "Template:Computer science", "Template:Software development process", "Template:Main", "Template:Excerpt", "Template:Reflist", "Template:Integrated development environments", "Template:Use dmy dates", "Template:Columns-list", "Template:Cite journal", "Template:ISBN" ]
https://en.wikipedia.org/wiki/Integrated_development_environment
15,308
Ian McKellen
Sir Ian Murray McKellen CH CBE (born 25 May 1939) is an English actor. With a career spanning more than six decades, he is noted for his roles on the screen and stage in genres ranging from Shakespearean dramas and modern theatre to popular fantasy and science fiction. He is regarded as a British cultural icon and was knighted by Queen Elizabeth II in 1991. He has received numerous accolades, including a Tony Award, six Olivier Awards, and a Golden Globe Award as well as nominations for two Academy Awards, five BAFTA Awards and five Emmy Awards. McKellen made his stage debut in 1961 at the Belgrade Theatre as a member of its repertory company, and in 1965 made his first West End appearance. In 1969, he was invited to join the Prospect Theatre Company to play the lead parts in Shakespeare's Richard II and Marlowe's Edward II. In the 1970s McKellen became a stalwart of the Royal Shakespeare Company and the National Theatre of Great Britain. He has earned five Olivier Awards for his roles in Pillars of the Community (1977), The Alchemist (1978), Bent (1979), Wild Honey (1984), and Richard III (1995). McKellen made his Broadway debut in The Promise (1965). He went on to receive the Tony Award for Best Actor in a Play for his role as Antonio Salieri in Amadeus (1980). He was further nominated for Ian McKellen: Acting Shakespeare (1984). He returned to Broadway in Wild Honey (1986), Dance of Death (2001), No Man's Land (2013), and Waiting for Godot (2013), the latter being a joint production with Patrick Stewart. McKellen achieved worldwide fame for his film roles, including the titular King in Richard III (1995), James Whale in Gods and Monsters (1998), Magneto in the X-Men films, and Gandalf in The Lord of the Rings (2001–2003) and The Hobbit (2012–2014) trilogies. Other notable film roles include A Touch of Love (1969), Plenty (1985), Six Degrees of Separation (1993), Restoration (1995), Mr. Holmes (2015), and The Good Liar (2019).
McKellen came out as gay in 1988, and has since championed LGBT social movements worldwide. He was awarded the Freedom of the City of London in October 2014. McKellen is a co-founder of Stonewall, an LGBT rights lobby group in the United Kingdom, named after the Stonewall riots. He is also patron of LGBT History Month, Pride London, Oxford Pride, GAY-GLOS, LGBT Foundation and FFLAG. McKellen was born on 25 May 1939 in Burnley, Lancashire, the son of Margery Lois (née Sutcliffe) and Denis Murray McKellen. He was their second child, with a sister, Jean, five years his senior. Shortly before the outbreak of the Second World War in September 1939, his family moved to Wigan. They lived there until Ian was twelve years old, before relocating to Bolton in 1951 after his father had been promoted. The experience of living through the war as a young child had a lasting impact on him, and he later said that "only after peace resumed ... did I realise that war wasn't normal". When an interviewer remarked that he seemed quite calm in the aftermath of the 11 September attacks, McKellen said: "Well, darling, you forget—I slept under a steel plate until I was four years old". McKellen's father was a civil engineer and lay preacher, and was of Protestant Irish and Scottish descent. Both of McKellen's grandfathers were preachers, and his great-great-grandfather, James McKellen, was a "strict, evangelical Protestant minister" in Ballymena, County Antrim. His home environment was strongly Christian, but non-orthodox. "My upbringing was of low nonconformist Christians who felt that you led the Christian life in part by behaving in a Christian manner to everybody you met". When he was 12, his mother died of breast cancer; his father died when he was 25. 
After his coming out as gay to his stepmother, Gladys McKellen, who was a Quaker, he said, "Not only was she not fazed, but as a member of a society which declared its indifference to people's sexuality years back, I think she was just glad for my sake that I wasn't lying any more". His great-great-grandfather Robert J. Lowes was an activist and campaigner in the ultimately successful campaign for a Saturday half-holiday in Manchester, the forerunner to the modern five-day work week, thus making Lowes a "grandfather of the modern weekend". McKellen attended Bolton School (Boys' Division), of which he is still a supporter, attending regularly to talk to pupils. McKellen's acting career started at Bolton Little Theatre, of which he is now the patron. An early fascination with the theatre was encouraged by his parents, who took him on a family outing to Peter Pan at the Opera House in Manchester when he was three. When he was nine, his main Christmas present was a fold-away wood and Bakelite Victorian theatre from Pollock's Toy Theatres, with cardboard scenery and wires to push on the cut-outs of Cinderella and of Laurence Olivier's reenactment of Shakespeare's "Hamlet". His sister took him to his first Shakespeare play, Twelfth Night, by the amateurs of Wigan's Little Theatre, shortly followed by their Macbeth and Wigan High School for Girls' production of A Midsummer Night's Dream, with music by Mendelssohn, with the role of Bottom played by Jean McKellen, who continued to act, direct, and produce amateur theatre until her death. In 1958, McKellen, at the age of 18, won a scholarship to St Catharine's College, Cambridge, where he read English literature. He has since been made an Honorary Fellow of the college. While at Cambridge, McKellen was a member of the Marlowe Society, where he appeared in 23 plays over the course of three years.
At that young age he was already giving performances that have since become legendary, such as his Justice Shallow in Henry IV alongside Trevor Nunn and Derek Jacobi (March 1959), Cymbeline (as Posthumus, opposite Margaret Drabble as Imogen) and Doctor Faustus. During this period McKellen had already been directed by Peter Hall, John Barton and Dadie Rylands, all of whom would have a significant impact on McKellen's future career. McKellen made his first professional appearance in 1961 at the Belgrade Theatre in Coventry, as Roper in A Man for All Seasons, although an audio recording of the Marlowe Society's Cymbeline had gone on commercial sale as part of the Argo Shakespeare series. After four years in regional repertory theatres, McKellen made his first West End appearance, in A Scent of Flowers, regarded as a "notable success". In 1965 he was a member of Laurence Olivier's National Theatre Company at the Old Vic, which led to roles at the Chichester Festival. With the Prospect Theatre Company, McKellen made his breakthrough performances of Shakespeare's Richard II (directed by Richard Cottrell) and Christopher Marlowe's Edward II (directed by Toby Robertson) at the Edinburgh Festival in 1969, the latter causing a storm of protest over the enactment of the homosexual Edward's lurid death. One of McKellen's first major roles on television was as the title character in the BBC's 1966 adaptation of David Copperfield, which achieved 12 million viewers on its initial airings. After some rebroadcasting in the late 60s, the master videotapes for the serial were wiped, and only four scattered episodes (3, 8, 9 and 11) survive as telerecordings, three of which feature McKellen as adult David. McKellen had taken film roles throughout his career—beginning in 1969 with his role of George Matthews in A Touch of Love, and his first leading role was in 1981 as D. H.
Lawrence in Priest of Love, but it was not until the 1990s that he became more widely recognised in this medium after several roles in blockbuster Hollywood films. In 1969, McKellen starred in three films: Michael Hayes's The Promise, Clive Donner's epic film Alfred the Great, and Waris Hussein's A Touch of Love. In the 1970s, McKellen became a well-known figure in British theatre, performing frequently at the Royal Shakespeare Company and the Royal National Theatre, where he played several leading Shakespearean roles. From 1973 to 1974, McKellen toured the United Kingdom and the Brooklyn Academy of Music, portraying Lady Wishfort's Footman, Kruschov, and Edgar in the William Congreve comedy The Way of the World, Anton Chekhov's comedic three-act play The Wood Demon, and William Shakespeare's tragedy King Lear. The following year, he starred in Shakespeare's King John, George Colman's The Clandestine Marriage, and George Bernard Shaw's Too True to Be Good. From 1976 to 1977 he portrayed Romeo in the Shakespeare romance Romeo & Juliet at the Royal Shakespeare Theatre. The following year he played King Leontes in The Winter's Tale. In 1976, McKellen played the title role in William Shakespeare's Macbeth at Stratford in a "gripping ... out of the ordinary" production, with Judi Dench, and Iago in Othello, in award-winning productions directed by Trevor Nunn. Both of these productions were adapted into television films, also directed by Nunn. From 1978 to 1979 he toured in a double-bill production of Shakespeare's Twelfth Night and Anton Chekhov's Three Sisters, portraying Sir Toby Belch and Andrei, respectively. In 1979, McKellen gained acclaim for his role as Antonio Salieri in the Broadway transfer production of Peter Shaffer's play Amadeus. It was an immensely popular play produced by the National Theatre originally starring Paul Scofield. The transfer starred McKellen, Tim Curry as Wolfgang Amadeus Mozart, and Jane Seymour as Constanze Mozart.
The New York Times theatre critic Frank Rich wrote of McKellen's performance: "In Mr. McKellen's superb performance, Salieri's descent into madness was portrayed in dark notes of almost bone-rattling terror". For his performance, McKellen received the Tony Award for Best Actor in a Play. In 1981, McKellen portrayed writer and poet D. H. Lawrence in the Christopher Miles-directed biographical film Priest of Love. He followed up with Michael Mann's horror film The Keep (1983). In 1985, he starred in Plenty, the film adaptation of the David Hare play of the same name. The film was directed by Fred Schepisi and starred Meryl Streep, Charles Dance, John Gielgud, and Sting. The film spans nearly 20 years from the early 1940s to the 1960s, around an Englishwoman's experiences as a fighter for the French Resistance during World War II when she has a one-night stand with a British intelligence agent. The film received mixed reviews, with Roger Ebert of The Chicago Sun-Times praising the film's ensemble cast, writing, "The performances in the movie supply one brilliant solo after another; most of the big moments come as characters dominate the scenes they are in". In 1986, he returned to Broadway in the revival of Anton Chekhov's first play Wild Honey alongside Kim Cattrall and Kate Burton. The play concerned a local Russian schoolteacher who struggles to remain faithful to his wife, despite the attention of three other women. McKellen received mixed reviews from critics, in particular Frank Rich of The New York Times, who praised him for his "bravura and athletically graceful technique that provides everything except, perhaps, the thing that matters most—sustained laughter". He later wrote, "Mr. McKellen finds himself in the peculiar predicament of the star who strains to carry a frail supporting cast". In 1989 he played Iago in a production of Othello by the Royal Shakespeare Company.
McKellen starred in the British drama Scandal (1989), a fictionalised account of the Profumo affair that rocked the government of British prime minister Harold Macmillan. McKellen portrayed John Profumo. The film starred Joanne Whalley and John Hurt. The film premiered at the 1989 Cannes Film Festival and competed for the Palme d'Or. When his friend and colleague, Patrick Stewart, decided to accept the role of Captain Jean-Luc Picard in the American television series Star Trek: The Next Generation, McKellen strongly advised him not to throw away his respected theatrical career to work in television. However, McKellen later conceded that Stewart had been prudent in accepting the role, which made him a global star; McKellen eventually followed his example, co-starring with Stewart in the X-Men superhero film series. From 1990 to 1992, he acted in a world tour of a lauded revival of Richard III, playing the title character. The production played at the Brooklyn Academy of Music for two weeks before continuing its tour, where Frank Rich of The New York Times was able to review it. In his piece, he praised McKellen's performance, writing, "Mr McKellen's highly sophisticated sense of theatre and fun drives him to reveal the secrets of how he pulls his victims' strings whether he is addressing the audience in a soliloquy or not". For his performance he received the Laurence Olivier Award for Best Actor. In 1992, he acted in Pam Gems's revival of Chekhov's Uncle Vanya at the Royal National Theatre alongside Antony Sher and Janet McTeer. In 1993, he starred in the film Six Degrees of Separation based on the Pulitzer Prize and Tony Award nominated play of the same name. McKellen starred alongside Will Smith, Donald Sutherland and Stockard Channing. The film was a critical success. That same year, he also appeared in the western The Ballad of Little Jo opposite Bob Hoskins and the action comedy Last Action Hero starring Arnold Schwarzenegger.
The following year, he appeared in the superhero film The Shadow with Alec Baldwin and the James L. Brooks-directed comedy I'll Do Anything starring Nick Nolte. In 1995, McKellen made his screenwriting debut with Richard III, an ambitious adaptation of William Shakespeare's play of the same name, directed by Richard Loncraine. The film reimagines the play's story and characters in a setting based on 1930s Britain, with Richard depicted as a fascist plotting to usurp the throne. McKellen stars in the title role alongside an ensemble cast including Annette Bening, Robert Downey Jr., Jim Broadbent, Kristin Scott Thomas, Nigel Hawthorne and Dame Maggie Smith. As executive producer he returned his £50,000 fee to complete the filming of the final battle. In his review of the film, The Washington Post film critic Hal Hinson called McKellen's performance a "lethally flamboyant incarnation" and said his "florid mastery ... dominates everything". Film critic Roger Ebert of the Chicago Sun-Times praised McKellen's adaptation and his performance in his four-star review, writing, "McKellen has a deep sympathy for the playwright ... Here he brings to Shakespeare's most tortured villain a malevolence we are moved to pity. No man should be so evil, and know it. Hitler and others were more evil, but denied it to themselves. There is no escape for Richard. He is one of the first self-aware characters in the theatre, and for that distinction he must pay the price". His performance in the title role garnered BAFTA and Golden Globe nominations for Best Actor and won the European Film Award for Best Actor. His screenplay was nominated for the BAFTA Award for Best Adapted Screenplay. That same year, he appeared in the historical drama Restoration (1995) also starring Downey Jr., as well as Meg Ryan, Hugh Grant, and David Thewlis. He also appeared in the British romantic comedy Jack and Sarah (1995) starring Richard E. Grant, Samantha Mathis, and Judi Dench.
In 1993, he appeared in minor roles in the television miniseries Tales of the City, based on the novel by his friend Armistead Maupin. Later that year, McKellen appeared in the HBO television film And the Band Played On, based on the acclaimed non-fiction book of the same name about the discovery of HIV. For his performance as gay rights activist Bill Kraus, McKellen received the CableACE Award for Supporting Actor in a Movie or Miniseries and was nominated for the Primetime Emmy Award for Outstanding Supporting Actor in a Miniseries or a Movie. From 1993 to 1997 McKellen toured in a one-man show entitled A Knight Out, about coming out as a gay man. Laurie Winer from The Los Angeles Times wrote, "Even if he is preaching to the converted, McKellen makes us aware of the vast and powerful intolerance outside the comfortable walls of the theatre. Endowed with a rare technique, he is a natural storyteller, an admirable human being and a hands-on activist". From 1997 to 1998, he starred as Dr. Tomas Stockmann in a revival of Henrik Ibsen's An Enemy of the People. Later that year he played Garry Essendine in the Noël Coward comedy Present Laughter at the West Yorkshire Playhouse. In 1998, he appeared in the modestly acclaimed psychological thriller Apt Pupil, which was directed by Bryan Singer and based on a story by Stephen King. McKellen portrayed a fugitive Nazi officer living under a false name in the US who is befriended by a curious teenager (Brad Renfro) who threatens to expose him unless he tells his story in detail. That same year, he played James Whale, the director of Frankenstein, in the Bill Condon-directed period drama Gods and Monsters, a role for which he was subsequently nominated for the Academy Award for Best Actor, losing to Roberto Benigni in Life is Beautiful (1998). In 1995, he appeared in the BBC television comedy film Cold Comfort Farm starring Kate Beckinsale, Rufus Sewell, and Stephen Fry.
The following year he starred as Tsar Nicholas II in the HBO made-for-television movie Rasputin: Dark Servant of Destiny (1996) starring Alan Rickman as Rasputin. For his performance, McKellen earned a nomination for the Primetime Emmy Award for Outstanding Supporting Actor in a Limited Series or Movie and won the Golden Globe Award for Best Supporting Actor – Series, Miniseries or Television Film. McKellen appeared as Mr Creakle in the BBC series David Copperfield (1999) based on the Charles Dickens classic novel. The miniseries starred a pre-Harry Potter Daniel Radcliffe, Bob Hoskins, and Dame Maggie Smith. In 1999, McKellen was cast, again under the direction of Bryan Singer, to play the comic book supervillain Magneto in the 2000 film X-Men and its sequels X2: X-Men United (2003) and X-Men: The Last Stand (2006). He later reprised his role of Magneto in 2014's X-Men: Days of Future Past, sharing the role with Michael Fassbender, who played a younger version of the character in 2011's X-Men: First Class. While filming the first X-Men film in 1999, McKellen was cast as the wizard Gandalf in Peter Jackson's film trilogy adaptation of The Lord of the Rings (consisting of The Fellowship of the Ring, The Two Towers, and The Return of the King), released between 2001 and 2003. He won the Screen Actors Guild Award for Best Supporting Actor in a Motion Picture for his work in The Fellowship of the Ring and was nominated for the Academy Award for Best Supporting Actor for the same role. He provided the voice of Gandalf for several video game adaptations of the Lord of the Rings films. McKellen returned to the Broadway stage in 2001 in August Strindberg's play The Dance of Death alongside Helen Mirren and David Strathairn at the Broadhurst Theatre.
The New York Times theatre critic Ben Brantley praised McKellen's performance, writing, "[McKellen] returns to Broadway to serve up an Elysian concoction we get to sample too little these days: a mixture of heroic stage presence, actorly intelligence, and rarefied theatrical technique". McKellen toured with the production to the Lyric Theatre in London's West End and to the Sydney Arts Festival in Australia. On 16 March 2002, he hosted Saturday Night Live. In 2003, McKellen made a guest appearance as himself on the American animated series The Simpsons, in a special British-themed episode entitled "The Regina Monologues", along with the then UK prime minister Tony Blair and the author J. K. Rowling. In April and May 2005, he played the role of Mel Hutchwright in Granada Television's long-running British soap opera Coronation Street, fulfilling a lifelong ambition. He narrated Richard Bell's film Eighteen, voicing a grandfather who leaves his World War II memoirs on audio-cassette for his teenage grandson. McKellen has appeared in limited-release films such as Emile (shot in three weeks following the X2 shoot), Neverwas, and Asylum. In 2006, he appeared as Sir Leigh Teabing in The Da Vinci Code, opposite Tom Hanks as Robert Langdon. During a 17 May 2006 interview on The Today Show with the Da Vinci Code cast and director Ron Howard, Matt Lauer asked the group how they would have felt if the film had borne a prominent disclaimer that it is a work of fiction, as some religious groups wanted. McKellen responded, "I've often thought the Bible should have a disclaimer in the front saying 'This is fiction'. I mean, walking on water? It takes ... an act of faith. And I have faith in this movie—not that it's true, not that it's factual, but that it's a jolly good story". He continued, "And I think audiences are clever enough and bright enough to separate out fact and fiction, and discuss the thing when they've seen it".
McKellen appeared in the 2006 series of Ricky Gervais's BBC comedy Extras, playing himself directing Gervais's character Andy Millman in a play about gay lovers; he received a 2007 Primetime Emmy Award nomination for Outstanding Guest Actor in a Comedy Series for the performance. In 2007, McKellen narrated the romantic fantasy adventure film Stardust, starring Charlie Cox and Claire Danes, which was a critical and financial success. That same year, he lent his voice to the armoured bear Iorek Byrnison in the Chris Weitz-directed fantasy film The Golden Compass, based on Philip Pullman's acclaimed novel Northern Lights and starring Nicole Kidman and Daniel Craig; the film received mixed reviews but was a financial success. Also in 2007, he returned to the Royal Shakespeare Company in productions of King Lear and The Seagull, both directed by Trevor Nunn. In 2009 he portrayed Number Two in The Prisoner, a remake of the 1967 cult series of the same name. That year, he also appeared in a popular revival of Waiting for Godot at London's Haymarket Theatre, directed by Sean Mathias and playing opposite Patrick Stewart. From 2013 to 2014, McKellen and Stewart starred in paired Broadway productions of Samuel Beckett's Waiting for Godot and Harold Pinter's No Man's Land at the Cort Theatre. Variety theatre critic Marilyn Stasio praised the double bill, writing, "McKellen and Stewart find plenty of consoling comedy in two masterpieces of existential despair"; in both productions, Stasio claimed, "the two thespians play the parts they were meant to play". He is Patron of English Touring Theatre and also President and Patron of the Little Theatre Guild of Great Britain, an association of amateur theatre organisations throughout the UK. In late August 2012, he took part in the opening ceremony of the London Paralympics, portraying Prospero from The Tempest.
McKellen reprised the role of Gandalf on screen in Peter Jackson's three-part film adaptation of The Hobbit, beginning with The Hobbit: An Unexpected Journey (2012) and followed by The Hobbit: The Desolation of Smaug (2013) and The Hobbit: The Battle of the Five Armies (2014). Despite mixed reviews, the series was a financial success. McKellen also reprised his role as Erik Lehnsherr/Magneto in James Mangold's The Wolverine (2013) and Singer's X-Men: Days of Future Past (2014). In November 2013, McKellen appeared in the Doctor Who 50th anniversary comedy homage The Five(ish) Doctors Reboot. From 2013 to 2016, McKellen co-starred as Freddie Thornhill in the ITV sitcom Vicious, alongside Derek Jacobi. The series revolves around an elderly gay couple who have been together for 50 years; its original title was "Vicious Old Queens". Running jokes concern Freddie's career as a relatively unsuccessful character actor, who owns a tux only because he stole it after doing a guest spot on Downton Abbey and who holds the title of "10th Most Popular Doctor Who Villain". Liz Shannon Miller of IndieWire noted that while the concept seemed "weird as hell", "Once you come to accept McKellen and Jacobi in a multi-camera format, there is a lot to respect about their performances; specifically, the way that those decades of classical training adapt themselves to the sitcom world. Much has been written before about how the tradition of the multi-cam, filmed in front of a studio audience, relates to theatre, and McKellen and Jacobi know how to play to a live crowd". In 2015, McKellen reunited with director Bill Condon to play an elderly Sherlock Holmes in the mystery film Mr. Holmes alongside Laura Linney. In the film, based on the 2005 novel A Slight Trick of the Mind, Holmes, now 93, struggles to recall the details of his final case as his mind slowly deteriorates.
The film premiered at the 65th Berlin International Film Festival, with McKellen receiving acclaim for his performance. Rolling Stone film critic Peter Travers wrote, "Don't think you can take another Hollywood version of Sherlock Holmes? Snap out of it. Apologies to Robert Downey Jr. and Benedict Cumberbatch, but what Ian McKellen does with Arthur Conan Doyle's fictional detective in Mr Holmes is nothing short of magnificent ... Director Bill Condon, who teamed superbly with McKellen on the Oscar-winning Gods and Monsters, brings us a riveting character study of a lion not going gentle into winter". In October 2015, McKellen appeared as Norman to Anthony Hopkins's Sir in a BBC Two production of Ronald Harwood's The Dresser, alongside Edward Fox, Vanessa Kirby, and Emily Watson. Television critic Tim Goodman of The Hollywood Reporter praised the film and its central performances, writing, "there's no escaping that Hopkins and McKellen are the central figures here, giving wonderfully nuanced performances, onscreen together for their first time in their acclaimed careers". McKellen received a British Academy Television Award nomination for his performance. In 2017, McKellen played the supporting role of Cogsworth (originally voiced by David Ogden Stiers in the 1991 animated film) in the live-action adaptation of Disney's Beauty and the Beast, directed by Bill Condon, his third collaboration with Condon after Gods and Monsters and Mr. Holmes, co-starring alongside Emma Watson and Dan Stevens. The film was released to positive reviews and grossed $1.2 billion worldwide, making it the highest-grossing live-action musical film, the second highest-grossing film of 2017, and the 17th highest-grossing film of all time. Also in 2017, McKellen appeared in the documentary McKellen: Playing the Part, directed by Joe Stephenson, which explores his life and career as an actor.
In October 2017, McKellen played King Lear at the Chichester Festival Theatre, a role which he said was likely to be his "last big Shakespearean part". He performed the play at the Duke of York's Theatre in London's West End during the summer of 2018. To celebrate his 80th birthday, in 2019 McKellen performed a one-man stage show, Ian McKellen on Stage: With Tolkien, Shakespeare, Others and YOU, celebrating the various performances of his career. The show toured across the UK and Ireland, raising money for charity at each venue, before a West End run at the Harold Pinter Theatre, and was performed for one night only on Broadway at the Hudson Theatre. In 2018, he appeared in Kenneth Branagh's historical drama All Is True, portraying Henry Wriothesley, 3rd Earl of Southampton, opposite Branagh and Judi Dench. In 2019, he reunited with Condon for a fourth time in the mystery thriller The Good Liar opposite Helen Mirren, with the pair receiving praise for their onscreen chemistry. That same year, he appeared as Gus the Theatre Cat in the film musical adaptation of Cats directed by Tom Hooper, which featured performances from Jennifer Hudson, James Corden, Rebel Wilson, Idris Elba, and Judi Dench; the film was widely panned for its visual effects, editing, performances, and screenplay, and was a box-office disaster. In 2021, he played the title role in an age-blind production of Hamlet (having previously played the part in a UK and European tour in 1971), followed by the role of Firs in Chekhov's The Cherry Orchard at the Theatre Royal, Windsor. Since November 2021, McKellen and ABBA member Björn Ulvaeus have posted Instagram videos featuring the pair knitting Christmas jumpers and other festive attire. In 2023, it was revealed that Ulvaeus and McKellen would be knitting stagewear for Kylie Minogue as part of her More Than Just a Residency concert residency at Voltaire at The Venetian Las Vegas.
In 2023, McKellen starred in the period thriller The Critic, directed by Anand Tucker and written by Patrick Marber, adapted from Anthony Quinn's 2015 novel Curtain Call; the film premiered at the 2023 Toronto International Film Festival. McKellen and his first partner, Brian Taylor, a history teacher from Bolton, began their relationship in 1964. It lasted eight years, ending in 1972. They lived in Earls Terrace, Kensington, London, where McKellen continued to pursue his career as an actor. In 1978 he met his second partner, Sean Mathias, at the Edinburgh Festival. This relationship lasted until 1988 and, according to Mathias, was tempestuous, with conflicts over McKellen's success in acting versus Mathias's somewhat less successful career. The two remained friends, with Mathias later directing McKellen in Waiting for Godot at the Theatre Royal Haymarket in 2009. The pair entered into a business partnership with Evgeny Lebedev, purchasing the lease of The Grapes public house in Narrow Street. As of 2005, McKellen had been living in Narrow Street, Limehouse, for more than 25 years, more than a decade of which had been spent in a five-storey Victorian conversion. McKellen is an atheist. In the late 1980s, he lost his appetite for every kind of meat but fish, and has since followed a mainly pescetarian diet. In 2001, McKellen received France's Artist Citizen of the World Award. McKellen has a tattoo of the Elvish number nine, written in J. R. R. Tolkien's constructed script Tengwar, on his shoulder, a reference to his involvement in The Lord of the Rings and the fact that his character was one of the original nine companions of the Fellowship of the Ring. All but one of the other actors of the Fellowship (Elijah Wood, Sean Astin, Orlando Bloom, Billy Boyd, Sean Bean, Dominic Monaghan and Viggo Mortensen) have the same tattoo (John Rhys-Davies did not get the tattoo, but his stunt double Brett Beattie did).
McKellen was diagnosed with prostate cancer in 2006. In 2012, he stated on his blog that "There is no cause for alarm. I am examined regularly and the cancer is contained. I've not needed any treatment". McKellen became an ordained minister of the Universal Life Church in early 2013 in order to preside over the marriage of his friend and X-Men co-star Patrick Stewart to the singer Sunny Ozell. McKellen was awarded an honorary Doctorate of Letters by the University of Cambridge on 18 June 2014. He was made a Freeman of the City of London on 30 October 2014, in a ceremony at Guildhall in London. He was nominated by London's Lord Mayor, Fiona Woolf, who called him an "exceptional actor" and "tireless campaigner for equality". He is also an Emeritus Fellow of St Catherine's College, Oxford. While McKellen had made his sexual orientation known to fellow actors early in his stage career, it was not until 1988 that he came out to the general public, while appearing on the BBC Radio programme Third Ear hosted by the conservative journalist Peregrine Worsthorne. What prompted his decision, overriding any concerns about a possible negative effect on his career, was that the controversial Section 28 of the Local Government Bill was then under consideration in the British Parliament. Section 28 proposed prohibiting local authorities from promoting homosexuality "... as a kind of pretended family relationship". McKellen has stated that he was influenced in his decision by the advice and support of his friends, among them the noted gay author Armistead Maupin. In a 1998 interview marking the 29th anniversary of the Stonewall riots, McKellen commented, "I have many regrets about not having come out earlier, but one of them might be that I didn't engage myself in the politicking".
He has said of this period: "My own participating in that campaign was a focus for people [to] take comfort that if Ian McKellen was on board for this, perhaps it would be all right for other people to be as well, gay and straight". Section 28 was, however, enacted, and remained on the statute books until 2000 in Scotland and 2003 in England and Wales; it never applied in Northern Ireland. In 2003, during an appearance on Have I Got News for You, McKellen recounted that when he visited Michael Howard, then Environment Secretary (responsible for local government), in 1988 to lobby against Section 28, Howard refused to change his position but did ask him to leave an autograph for his children. McKellen agreed, but wrote, "Fuck off, I'm gay". McKellen described Conservatives David Wilshire and Jill Knight, the architects of Section 28, as the "ugly sisters" of a political pantomime. McKellen has continued to be very active in LGBT rights efforts. In a statement on his website regarding his activism, he commented: "I have been reluctant to lobby on other issues I most care about—nuclear weapons (against), religion (atheist), capital punishment (anti), AIDS (fund-raiser)—because I never want to be forever spouting, diluting the impact of addressing my most urgent concern: legal and social equality for gay people worldwide". McKellen is a co-founder of Stonewall, an LGBT rights lobby group in the United Kingdom, named after the Stonewall riots. He is also a patron of LGBT History Month, Pride London, Oxford Pride, GAY-GLOS, the LGBT Foundation and FFLAG, for which he appears in the video "Parents Talking". In 1994, at the closing ceremony of the Gay Games, he briefly took the stage to address the crowd, saying, "I'm Sir Ian McKellen, but you can call me Serena". This nickname, given to him by Stephen Fry, had been circulating within the gay community since McKellen's knighthood was conferred.
In 2002, he was the Celebrity Grand Marshal of the San Francisco Pride Parade, and he attended the Academy Awards with his then-boyfriend, New Zealander Nick Cuthell. In 2006, McKellen spoke at the pre-launch of the 2007 LGBT History Month in the UK, lending his support to the organisation and its founder, Sue Sanders. In 2007, he became a patron of the Albert Kennedy Trust, an organisation that provides support to young, homeless and troubled LGBT people. In 2006, he became a patron of Oxford Pride, stating: "I send my love to all members of Oxford Pride, their sponsors and supporters, of which I am proud to be one ... Onlookers can be impressed by our confidence and determination to be ourselves and gay people, of whatever age, can be comforted by the occasion to take the first steps towards coming out and leaving the closet forever behind". McKellen has taken his activism internationally, and caused a major stir in Singapore where, invited onto a morning show for an interview, he shocked the interviewer by asking for a recommendation for a gay bar; the programme ended abruptly. In December 2008, he was named in Out magazine's annual Out 100 list. In 2010, McKellen extended his support to Liverpool's Homotopia festival, in which a group of gay and lesbian Merseyside teenagers helped to produce an anti-homophobia campaign pack for schools and youth centres across the city. In May 2011, he called Moscow's mayor, Sergey Sobyanin, a "coward" for refusing to allow gay parades in the city. In 2014, he was named in the top 10 of the World Pride Power list. In April 2010, along with actors Brian Cox and Eleanor Bron, McKellen appeared in a series of TV advertisements supporting Age UK, the charity formed from the merger of Age Concern and Help the Aged. All three actors gave their time free of charge.
A cricket fan since childhood, McKellen umpired a charity cricket match in New Zealand in March 2011 to support victims of the February 2011 Christchurch earthquake. McKellen is an honorary board member of the New York- and Washington, D.C.-based organisation Only Make Believe, which creates and performs interactive plays in children's hospitals and care facilities. He was honoured by the organisation in 2012 and hosted its annual Make Believe on Broadway Gala in November 2013, garnering publicity for it by stripping down to his Lord of the Rings underwear on stage. McKellen also has a history of supporting individual theatres. While in New Zealand filming The Hobbit in 2012, he announced a special New Zealand tour, "Shakespeare, Tolkien and You!", with proceeds going to help save the Isaac Theatre Royal, which suffered extensive damage during the 2011 Christchurch earthquake. McKellen said he opted to help save the building as it was the last theatre he had played in New Zealand (Waiting for Godot in 2010) and the locals' love for it made it a place worth supporting. In July 2017, he performed a new one-man show for a week at the Park Theatre in London, donating the proceeds to the theatre. Together with a number of his Lord of the Rings co-stars (plus writer Philippa Boyens and director Peter Jackson), on 1 June 2020 McKellen joined Josh Gad's YouTube series Reunited Apart, which reunites the casts of popular movies through video-conferencing and promotes donations to non-profit charities. A friend of Ian Charleson and an admirer of his work, McKellen contributed an entire chapter to For Ian Charleson: A Tribute. A recording of McKellen's voice is heard before performances at the Royal Festival Hall, reminding patrons to ensure their mobile phones and watch alarms are switched off and to keep coughing to a minimum.
He also took part in the 2012 Summer Paralympics opening ceremony in London as Prospero from Shakespeare's The Tempest. McKellen has received two Academy Award nominations, for his performances in Gods and Monsters (1999) and The Lord of the Rings: The Fellowship of the Ring (2001), as well as five Primetime Emmy Award nominations. He has received two Tony Award nominations, winning Best Actor in a Play for his performance in Amadeus in 1981. He has also received 12 Laurence Olivier Award nominations, winning six awards, for his performances in Pillars of the Community (1977), The Alchemist (1978), Bent (1979), Wild Honey (1984), Richard III (1991), and Ian McKellen on Stage: With Tolkien, Shakespeare, Others and YOU (2020). He has received various honorary awards, including the Pride International Film Festival's Lifetime Achievement & Distinction Award in 2004 and the Olivier Awards' Special Award in 2006. He received the Evening Standard Theatre Awards' Lebedev Special Award in 2009 and, the following year, the Empire Icon Award at the Empire Awards. In 2017 he received the Honorary Award from the Istanbul International Film Festival. The BBC has stated that his "performances have guaranteed him a place in the canon of English stage and film actors". McKellen was appointed a CBE in 1979, was knighted in 1991 for services to the performing arts, and was made a Member of the Order of the Companions of Honour for services to drama and to equality in the 2008 New Year Honours.
[ { "paragraph_id": 0, "text": "Sir Ian Murray McKellen CH CBE (born 25 May 1939) is an English actor. With a career spanning more than six decades, he is noted for his roles on the screen and stage in genres ranging from Shakespearean dramas and modern theatre to popular fantasy and science fiction. He is regarded as a British cultural icon and was knighted by Queen Elizabeth II in 1991. He has received numerous accolades, including a Tony Award, six Olivier Awards, and a Golden Globe Award as well as nominations for two Academy Awards, five BAFTA Awards and five Emmy Awards.", "title": "" }, { "paragraph_id": 1, "text": "McKellen made his stage debut in 1961 at the Belgrade Theatre as a member of its repertory company, and in 1965 made his first West End appearance. In 1969, he was invited to join the Prospect Theatre Company to play the lead parts in Shakespeare's Richard II and Marlowe's Edward II. In the 1970s McKellen became a stalwart of the Royal Shakespeare Company and the National Theatre of Great Britain. He has earned five Olivier Awards for his roles in Pillars of the Community (1977), The Alchemist (1978), Bent (1979), Wild Honey (1984), and Richard III (1995). McKellen made his Broadway debut in The Promise (1965). He went on to receive the Tony Award for Best Actor in a Play for his role as Antonio Salieri in Amadeus (1980). He was further nominated for Ian McKellen: Acting Shakespeare (1984). He returned to Broadway in Wild Honey (1986), Dance of Death (1990), No Man's Land (2013), and Waiting for Godot (2013), the latter being a joint production with Patrick Stewart.", "title": "" }, { "paragraph_id": 2, "text": "McKellen achieved worldwide fame for his film roles, including the titular King in Richard III (1995), James Whale in Gods and Monsters (1998), Magneto in the X-Men films, and Gandalf in The Lord of the Rings (2001–2003) and The Hobbit (2012–2014) trilogies. 
Other notable film roles include A Touch of Love (1969), Plenty (1985), Six Degrees of Separation (1993), Restoration (1995), Mr. Holmes (2015), and The Good Liar (2019).", "title": "" }, { "paragraph_id": 3, "text": "McKellen came out as gay in 1988, and has since championed LGBT social movements worldwide. He was awarded the Freedom of the City of London in October 2014. McKellen is a co-founder of Stonewall, an LGBT rights lobby group in the United Kingdom, named after the Stonewall riots. He is also patron of LGBT History Month, Pride London, Oxford Pride, GAY-GLOS, LGBT Foundation and FFLAG.", "title": "" }, { "paragraph_id": 4, "text": "McKellen was born on 25 May 1939 in Burnley, Lancashire, the son of Margery Lois (née Sutcliffe) and Denis Murray McKellen. He was their second child, with a sister, Jean, five years his senior. Shortly before the outbreak of the Second World War in September 1939, his family moved to Wigan. They lived there until Ian was twelve years old, before relocating to Bolton in 1951 after his father had been promoted. The experience of living through the war as a young child had a lasting impact on him, and he later said that \"only after peace resumed ... did I realise that war wasn't normal\". When an interviewer remarked that he seemed quite calm in the aftermath of the 11 September attacks, McKellen said: \"Well, darling, you forget—I slept under a steel plate until I was four years old\".", "title": "Early life and education" }, { "paragraph_id": 5, "text": "McKellen's father was a civil engineer and lay preacher, and was of Protestant Irish and Scottish descent. Both of McKellen's grandfathers were preachers, and his great-great-grandfather, James McKellen, was a \"strict, evangelical Protestant minister\" in Ballymena, County Antrim. His home environment was strongly Christian, but non-orthodox. 
\"My upbringing was of low nonconformist Christians who felt that you led the Christian life in part by behaving in a Christian manner to everybody you met\". When he was 12, his mother died of breast cancer; his father died when he was 25. After his coming out as gay to his stepmother, Gladys McKellen, who was a Quaker, he said, \"Not only was she not fazed, but as a member of a society which declared its indifference to people's sexuality years back, I think she was just glad for my sake that I wasn't lying any more\". His great-great-grandfather Robert J. Lowes was an activist and campaigner in the ultimately successful campaign for a Saturday half-holiday in Manchester, the forerunner to the modern five-day work week, thus making Lowes a \"grandfather of the modern weekend\".", "title": "Early life and education" }, { "paragraph_id": 6, "text": "McKellen attended Bolton School (Boys' Division), of which he is still a supporter, attending regularly to talk to pupils. McKellen's acting career started at Bolton Little Theatre, of which he is now the patron. An early fascination with the theatre was encouraged by his parents, who took him on a family outing to Peter Pan at the Opera House in Manchester when he was three. 
When he was nine, his main Christmas present was a fold-away wood and bakelite Victorian theatre from Pollocks Toy Theatres, with cardboard scenery and wires to push on the cut-outs of Cinderella and of Laurence Olivier's reenactment of Shakespeare's \"Hamlet\".", "title": "Early life and education" }, { "paragraph_id": 7, "text": "His sister took him to his first Shakespeare play, Twelfth Night, by the amateurs of Wigan's Little Theatre, shortly followed by their Macbeth and Wigan High School for Girls' production of A Midsummer Night's Dream, with music by Mendelssohn, with the role of Bottom played by Jean McKellen, who continued to act, direct, and produce amateur theatre until her death.", "title": "Early life and education" }, { "paragraph_id": 8, "text": "In 1958, McKellen, at the age of 18, won a scholarship to St Catharine's College, Cambridge, where he read English literature. He has since been made an Honorary Fellow of the college. While at Cambridge, McKellen was a member of the Marlowe Society, where he appeared in 23 plays over the course of 3 years. At that young age he was already giving performances that have since become legendary such as his Justice Shallow in Henry IV alongside Trevor Nunn and Derek Jacobi (March 1959), Cymbeline (as Posthumus, opposite Margaret Drabble as Imogen) and Doctor Faustus. During this period McKellen had already been directed by Peter Hall, John Barton and Dadie Rylands, all of whom would have a significant impact on McKellen's future career.", "title": "Early life and education" }, { "paragraph_id": 9, "text": "McKellen made his first professional appearance in 1961 at the Belgrade Theatre in Coventry, as Roper in A Man for All Seasons, although an audio recording of the Marlowe Society's Cymbeline had gone on commercial sale as part of the Argo Shakespeare series. After four years in regional repertory theatres, McKellen made his first West End appearance, in A Scent of Flowers, regarded as a \"notable success\". 
In 1965 he was a member of Laurence Olivier's National Theatre Company at the Old Vic, which led to roles at the Chichester Festival. With the Prospect Theatre Company, McKellen made his breakthrough performances of Shakespeare's Richard II (directed by Richard Cottrell) and Christopher Marlowe's Edward II (directed by Toby Robertson) at the Edinburgh Festival in 1969, the latter causing a storm of protest over the enactment of the homosexual Edward's lurid death.", "title": "Career" }, { "paragraph_id": 10, "text": "One of McKellen's first major roles on television was as the title character in the BBC's 1966 adaptation of David Copperfield, which achieved 12 million viewers on its initial airings. After some rebroadcasting in the late 60s, the master videotapes for the serial were wiped, and only four scattered episodes (3, 8, 9 and 11) survive as telerecordings, three of which feature McKellen as adult David. McKellen had taken film roles throughout his career—beginning in 1969 with his role of George Matthews in A Touch of Love, and his first leading role was in 1980 as D. H. Lawrence in Priest of Love, but it was not until the 1990s that he became more widely recognised in this medium after several roles in blockbuster Hollywood films. In 1969, McKellen starred in three films, Michael Hayes's The Promise, Clive Donner's epic film Alfred the Great, and Waris Hussein's A Touch of Love (1969).", "title": "Career" }, { "paragraph_id": 11, "text": "In the 1970s, McKellen became a well-known figure in British theatre, performing frequently at the Royal Shakespeare Company and the Royal National Theatre, where he played several leading Shakespearean roles. From 1973 to 1974, McKellen toured the United Kingdom and Brooklyn Academy of Music portraying Lady Wishfort's Footman, Kruschov, and Edgar in the William Congreve comedy The Way of the World, Anton Chekov's comedic three-act play The Wood Demon and William Shakespeare tragedy King Lear. 
The following year, he starred in Shakespeare's King John, George Colman's The Clandestine Marriage, and George Bernard Shaw's Too True to Be Good. From 1976 to 1977 he portrayed Romeo in the Shakespeare romance Romeo & Juliet at the Royal Shakespeare Theatre. The following year he played King Leontes in The Winter's Tale.", "title": "Career" }, { "paragraph_id": 12, "text": "In 1976, McKellen played the title role in William Shakespeare's Macbeth at Stratford in a \"gripping ... out of the ordinary\" production, with Judi Dench, and Iago in Othello, in award-winning productions directed by Trevor Nunn. Both of these productions were adapted into television films, also directed by Nunn. From 1978 to 1979 he toured in a double feature production of Shakespeare's Twelfth Night, and Anton Chekov's Three Sisters portraying Sir Toby Belch and Andrei, respectively. In 1979, McKellen gained acclaim for his role as Antonio Salieri in the Broadway transfer production of Peter Shaffer's play Amadeus. It was an immensely popular play produced by the National Theatre originally starring Paul Scofield. The transfer starred McKellen, Tim Curry as Wolfgang Amadeus Mozart, and Jane Seymour as Constanze Mozart. The New York Times theatre critic Frank Rich wrote of McKellen's performance \"In Mr. McKellen's superb performance, Salieri's descent into madness was portrayed in dark notes of almost bone-rattling terror\". For his performance, McKellen received the Tony Award for Best Actor in a Play.", "title": "Career" }, { "paragraph_id": 13, "text": "In 1981, McKellen portrayed writer and poet D. H. Lawrence in the Christopher Miles directed biographical film, Priest of Love. He followed up with Michael Mann's horror film The Keep (1983). In 1985, he starred in Plenty, the film adaptation of the David Hare play of the same name. The film was directed by Fred Schepisi and starred Meryl Streep, Charles Dance, John Gielgud, and Sting. 
The film spans nearly 20 years from the early 1940s to the 1960s, around an Englishwoman's experiences as a fighter for the French Resistance during World War II when she has a one-night stand with a British intelligence agent. The film received mixed reviews with Roger Ebert of The Chicago Sun-Times praising the film's ensemble cast writing, \"The performances in the movie supply one brilliant solo after another; most of the big moments come as characters dominate the scenes they are in\".", "title": "Career" }, { "paragraph_id": 14, "text": "In 1986, he returned to Broadway in the revival of Anton Chekhov's first play Wild Honey alongside Kim Cattrall and Kate Burton. The play concerned a local Russian schoolteacher who struggles to remain faithful to his wife, despite the attention of three other women. McKellen received mixed reviews from critics in particular Frank Rich of The New York Times who praised him for his \"bravura and athletically graceful technique that provides everything except, perhaps, the thing that matters most—sustained laughter\". He later wrote, \"Mr. McKellen finds himself in the peculiar predicament of the star who strains to carry a frail supporting cast\". In 1989 he played Iago in production of Othello by the Royal Shakespeare Company. McKellen starred in the British drama Scandal (1989) a fictionalised account of the Profumo affair that rocked the government of British prime minister Harold Macmillan. McKellen portrayed John Profumo. The film starred Joanne Whalley, and John Hurt. The film premiered at the 1989 Cannes Film Festival and competed for the Palme d'Or. When his friend and colleague, Patrick Stewart, decided to accept the role of Captain Jean-Luc Picard in the American television series, Star Trek: The Next Generation, McKellan strongly advised him not to throw away his respected theatrical career to work in television. 
However, McKellen later conceded that Stewart had been prudent in accepting the role, which made him a global star; McKellen later followed his example, co-starring with Stewart in the X-Men superhero film series.", "title": "Career" }, { "paragraph_id": 15, "text": "From 1990 to 1992, he acted in a world tour of a lauded revival of Richard III, playing the title character. The production played at the Brooklyn Academy of Music for two weeks before continuing its tour, where Frank Rich of The New York Times was able to review it. In his piece, he praised McKellen's performance, writing, \"Mr McKellen's highly sophisticated sense of theatre and fun drives him to reveal the secrets of how he pulls his victims' strings whether he is addressing the audience in a soliloquy or not\". For his performance, he received the Laurence Olivier Award for Best Actor.", "title": "Career" }, { "paragraph_id": 16, "text": "In 1992, he acted in Pam Gems's revival of Chekhov's Uncle Vanya at the Royal National Theatre alongside Antony Sher and Janet McTeer. In 1993, he starred in the film Six Degrees of Separation, based on the Pulitzer Prize- and Tony Award-nominated play of the same name. McKellen starred alongside Will Smith, Donald Sutherland and Stockard Channing. The film was a critical success. That same year, he also appeared in the western The Ballad of Little Jo opposite Bob Hoskins and the action comedy Last Action Hero starring Arnold Schwarzenegger. The following year, he appeared in the superhero film The Shadow with Alec Baldwin and the James L. Brooks-directed comedy I'll Do Anything starring Nick Nolte.", "title": "Career" }, { "paragraph_id": 17, "text": "In 1995, McKellen made his screenwriting debut with Richard III, an ambitious adaptation of William Shakespeare's play of the same name, directed by Richard Loncraine. The film transposes the play's story and characters to a setting based on 1930s Britain, with Richard depicted as a fascist plotting to usurp the throne. 
McKellen stars in the title role alongside an ensemble cast including Annette Bening, Robert Downey Jr., Jim Broadbent, Kristin Scott Thomas, Nigel Hawthorne and Dame Maggie Smith. As executive producer, he returned his £50,000 fee to complete the filming of the final battle. In his review of the film, The Washington Post film critic Hal Hinson called McKellen's performance a \"lethally flamboyant incarnation\" and said his \"florid mastery ... dominates everything\". Film critic Roger Ebert of the Chicago Sun-Times praised McKellen's adaptation and his performance in his four-star review, writing, \"McKellen has a deep sympathy for the playwright ... Here he brings to Shakespeare's most tortured villain a malevolence we are moved to pity. No man should be so evil, and know it. Hitler and others were more evil, but denied it to themselves. There is no escape for Richard. He is one of the first self-aware characters in the theatre, and for that distinction he must pay the price\". His performance in the title role garnered BAFTA and Golden Globe nominations for Best Actor and won the European Film Award for Best Actor. His screenplay was nominated for the BAFTA Award for Best Adapted Screenplay. That same year, he appeared in the historical drama Restoration (1995), also starring Downey Jr., as well as Meg Ryan, Hugh Grant, and David Thewlis. He also appeared in the British romantic comedy Jack and Sarah (1995), starring Richard E. Grant, Samantha Mathis, and Judi Dench.", "title": "Career" }, { "paragraph_id": 18, "text": "In 1993, he appeared in minor roles in the television miniseries Tales of the City, based on the novel by his friend Armistead Maupin. Later that year, McKellen appeared in the HBO television film And the Band Played On, based on the acclaimed book of the same name about the discovery of HIV. 
For his performance as gay rights activist Bill Kraus, McKellen received the CableACE Award for Supporting Actor in a Movie or Miniseries and was nominated for the Primetime Emmy Award for Outstanding Supporting Actor in a Miniseries or a Movie. From 1993 to 1997, McKellen toured in a one-man show entitled A Knight Out, about coming out as a gay man. Laurie Winer of the Los Angeles Times wrote, \"Even if he is preaching to the converted, McKellen makes us aware of the vast and powerful intolerance outside the comfortable walls of the theatre. Endowed with a rare technique, he is a natural storyteller, an admirable human being and a hands-on activist\". From 1997 to 1998, he starred as Dr. Tomas Stockmann in a revival of Henrik Ibsen's An Enemy of the People. Later that year he played Garry Essendine in the Noël Coward comedy Present Laughter at the West Yorkshire Playhouse. In 1998, he appeared in the modestly acclaimed psychological thriller Apt Pupil, which was directed by Bryan Singer and based on a story by Stephen King. McKellen portrayed a fugitive Nazi officer living under a false name in the US who is befriended by a curious teenager (Brad Renfro) who threatens to expose him unless he tells his story in detail. That same year, he played James Whale, the director of Frankenstein, in the Bill Condon-directed period drama Gods and Monsters, a role for which he was subsequently nominated for the Academy Award for Best Actor, losing to Roberto Benigni in Life Is Beautiful (1998).", "title": "Career" }, { "paragraph_id": 19, "text": "In 1995, he appeared in the BBC television comedy film Cold Comfort Farm starring Kate Beckinsale, Rufus Sewell, and Stephen Fry. The following year he starred as Tsar Nicholas II in the HBO made-for-television movie Rasputin: Dark Servant of Destiny (1996) starring Alan Rickman as Rasputin. 
For his performance, McKellen received a Primetime Emmy Award nomination for Outstanding Supporting Actor in a Limited Series or Movie and won the Golden Globe Award for Best Supporting Actor – Series, Miniseries or Television Film. McKellen appeared as Mr Creakle in the BBC series David Copperfield (1999), based on the classic Charles Dickens novel. The miniseries starred a pre-Harry Potter Daniel Radcliffe, Bob Hoskins, and Dame Maggie Smith.", "title": "Career" }, { "paragraph_id": 20, "text": "In 1999, McKellen was cast, again under the direction of Bryan Singer, to play the comic book supervillain Magneto in the 2000 film X-Men and its sequels X2: X-Men United (2003) and X-Men: The Last Stand (2006). He later reprised his role of Magneto in 2014's X-Men: Days of Future Past, sharing the role with Michael Fassbender, who played a younger version of the character in 2011's X-Men: First Class.", "title": "Career" }, { "paragraph_id": 21, "text": "While filming the first X-Men film in 1999, McKellen was cast as the wizard Gandalf in Peter Jackson's film trilogy adaptation of The Lord of the Rings (consisting of The Fellowship of the Ring, The Two Towers, and The Return of the King), released between 2001 and 2003. He won the Screen Actors Guild Award for Best Supporting Actor in a Motion Picture for his work in The Fellowship of the Ring and was nominated for the Academy Award for Best Supporting Actor for the same role. He provided the voice of Gandalf for several video game adaptations of the Lord of the Rings films.", "title": "Career" }, { "paragraph_id": 22, "text": "McKellen returned to the Broadway stage in 2001 in the August Strindberg play The Dance of Death alongside Helen Mirren and David Strathairn at the Broadhurst Theatre. 
The New York Times theatre critic Ben Brantley praised McKellen's performance, writing, \"[McKellen] returns to Broadway to serve up an Elysian concoction we get to sample too little these days: a mixture of heroic stage presence, actorly intelligence, and rarefied theatrical technique\". McKellen toured with the production at the Lyric Theatre in London's West End and to the Sydney Arts Festival in Australia. On 16 March 2002, he hosted Saturday Night Live. In 2003, McKellen made a guest appearance as himself on the American cartoon show The Simpsons in a special British-themed episode entitled \"The Regina Monologues\", along with the then UK Prime Minister Tony Blair and author J. K. Rowling. In April and May 2005, he played the role of Mel Hutchwright in Granada Television's long-running British soap opera Coronation Street, fulfilling a lifelong ambition. He narrated Richard Bell's film Eighteen, voicing a grandfather who leaves his World War II memoirs on audio-cassette for his teenage grandson.", "title": "Career" }, { "paragraph_id": 23, "text": "McKellen has appeared in limited release films, such as Emile (which was shot in three weeks following the X2 shoot), Neverwas and Asylum. In 2006, he appeared as Sir Leigh Teabing in The Da Vinci Code opposite Tom Hanks as Robert Langdon. During a 17 May 2006 interview on The Today Show with the Da Vinci Code cast and director Ron Howard, Matt Lauer posed a question to the group about how they would have felt if the film had borne a prominent disclaimer that it is a work of fiction, as some religious groups wanted. McKellen responded, \"I've often thought the Bible should have a disclaimer in the front saying 'This is fiction'. I mean, walking on water? It takes ... an act of faith. And I have faith in this movie—not that it's true, not that it's factual, but that it's a jolly good story\". 
He continued, \"And I think audiences are clever enough and bright enough to separate out fact and fiction, and discuss the thing when they've seen it\".", "title": "Career" }, { "paragraph_id": 24, "text": "McKellen appeared in the 2006 BBC series of Ricky Gervais's comedy series Extras, where he played himself directing Gervais's character Andy Millman in a play about gay lovers. McKellen received a 2007 Primetime Emmy Award for Outstanding Guest Actor – Comedy Series nomination for his performance. In 2007, McKellen narrated the romantic fantasy adventure film Stardust starring Charlie Cox and Claire Danes, which was a critical and financial success. That same year, he lent his voice to the armored bear Iorek Byrnison in the Chris Weitz-directed fantasy film The Golden Compass based on the acclaimed Philip Pullman novel Northern Lights and starred Nicole Kidman and Daniel Craig. The film received mixed reviews but was a financial success.", "title": "Career" }, { "paragraph_id": 25, "text": "In 2007, he returned to the Royal Shakespeare Company, in productions of King Lear and The Seagull, both directed by Trevor Nunn. In 2009 he portrayed Number Two in The Prisoner, a remake of the 1967 cult series The Prisoner. In 2009, he appeared in a very popular revival of Waiting for Godot at London's Haymarket Theatre, directed by Sean Mathias, and playing opposite Patrick Stewart. From 2013 to 2014, McKellen and Stewart starred in a double production of Samuel Beckett's Waiting for Godot and Harold Pinter's No Man's Land on Broadway at the Cort Theatre. Variety theatre critic Marilyn Stasio praised the dual production writing, \"McKellen and Stewart find plenty of consoling comedy in two masterpieces of existential despair\". In both productions of Stasio claims, \"the two thespians play the parts they were meant to play\". 
He is Patron of English Touring Theatre and also President and Patron of the Little Theatre Guild of Great Britain, an association of amateur theatre organisations throughout the UK. In late August 2012, he took part in the opening ceremony of the London Paralympics, portraying Prospero from The Tempest.", "title": "Career" }, { "paragraph_id": 26, "text": "McKellen reprised the role of Gandalf on screen in Peter Jackson's three-part film adaptation of The Hobbit starting with The Hobbit: An Unexpected Journey (2012), followed by The Hobbit: The Desolation of Smaug (2013), and finally The Hobbit: The Battle of the Five Armies (2014). Despite the series receiving mixed reviews, it emerged as a financial success. McKellen also reprised his role as Erik Lehnsherr/Magneto in James Mangold's The Wolverine (2013), and Singer's X-Men: Days of Future Past (2014). In November 2013, McKellen appeared in the Doctor Who 50th anniversary comedy homage The Five(ish) Doctors Reboot. From 2013 to 2016, McKellen co-starred in the ITV sitcom Vicious as Freddie Thornhill, alongside Derek Jacobi. The series revolves around an elderly gay couple who have been together for 50 years. The show's original title was \"Vicious Old Queens\". There are ongoing jokes about Freddie's career as a relatively unsuccessful character actor, who owns a tux because he stole it after doing a guest spot on \"Downton Abbey\" and holds the title of \"10th Most Popular 'Doctor Who' Villain\". Liz Shannon Miller of IndieWire noted that while the concept seemed \"weird as hell\", \"Once you come to accept McKellen and Jacobi in a multi-camera format, there is a lot to respect about their performances; specifically, the way that those decades of classical training adapt themselves to the sitcom world. 
Much has been written before about how the tradition of the multi-cam, filmed in front of a studio audience, relates to theatre, and McKellen and Jacobi know how to play to a live crowd\".", "title": "Career" }, { "paragraph_id": 27, "text": "In 2015, McKellen reunited with director Bill Condon playing an elderly Sherlock Holmes in the mystery film Mr. Holmes alongside Laura Linney. In the film, based on the novel A Slight Trick of the Mind (2005), Holmes, now 93, struggles to recall the details of his final case because his mind is slowly deteriorating. The film premiered at the 65th Berlin International Film Festival with McKellen receiving acclaim for his performance. Rolling Stone film critic Peter Travers praised his performance, writing, \"Don't think you can take another Hollywood version of Sherlock Holmes? Snap out of it. Apologies to Robert Downey Jr. and Benedict Cumberbatch, but what Ian McKellen does with Arthur Conan Doyle's fictional detective in Mr Holmes is nothing short of magnificent ... Director Bill Condon, who teamed superbly with McKellen on the Oscar-winning Gods and Monsters, brings us a riveting character study of a lion not going gentle into winter\". In October 2015, McKellen appeared as Norman to Anthony Hopkins's Sir in a BBC Two production of Ronald Harwood's The Dresser, alongside Edward Fox, Vanessa Kirby, and Emily Watson. Television critic Tim Goodman of The Hollywood Reporter praised the film and the central performances, writing, \"there's no escaping that Hopkins and McKellen are the central figures here, giving wonderfully nuanced performances, onscreen together for their first time in their acclaimed careers\". 
McKellen received a British Academy Television Award nomination for his performance.", "title": "Career" }, { "paragraph_id": 28, "text": "In 2017, McKellen played the supporting role of Cogsworth (originally voiced by David Ogden Stiers in the 1991 animated film) in the live-action adaptation of Disney's Beauty and the Beast, directed by Bill Condon (which marked the third collaboration between Condon and McKellen, after Gods and Monsters and Mr. Holmes), co-starring alongside Emma Watson and Dan Stevens. The film was released to positive reviews and grossed $1.2 billion worldwide, making it the highest-grossing live-action musical film, the second highest-grossing film of 2017, and the 17th highest-grossing film of all time. That same year, McKellen appeared in the documentary McKellen: Playing the Part, directed by Joe Stephenson. The documentary explores McKellen's life and career as an actor.", "title": "Career" }, { "paragraph_id": 29, "text": "In October 2017, McKellen played King Lear at the Chichester Festival Theatre, a role which he said was likely to be his \"last big Shakespearean part\". He performed the play at the Duke of York's Theatre in London's West End during the summer of 2018. To celebrate his 80th birthday, in 2019 McKellen performed in a one-man stage show titled Ian McKellen on Stage: With Tolkien, Shakespeare, Others and YOU celebrating the various performances throughout his career. The show toured across the UK and Ireland (raising money for each venue and organisation's charity) before a West End run at the Harold Pinter Theatre and was performed for one night only on Broadway at the Hudson Theatre.", "title": "Career" }, { "paragraph_id": 30, "text": "He appeared in Kenneth Branagh's historical drama All Is True (2018), portraying Henry Wriothesley, 3rd Earl of Southampton, opposite Branagh and Judi Dench. 
In 2019, he reunited with Condon for a fourth time in the mystery thriller The Good Liar opposite Helen Mirren; the pair received praise for their onscreen chemistry. That same year, he appeared as Gus the Theatre Cat in the movie musical adaptation of Cats directed by Tom Hooper. The film featured performances from Jennifer Hudson, James Corden, Rebel Wilson, Idris Elba, and Judi Dench. The film was widely panned for its poor visual effects, editing, performances, and screenplay, and was a box-office disaster. In 2021, he played the title role in an age-blind production of Hamlet (having previously played the part in a UK and European tour in 1971), followed by the role of Firs in Chekhov's The Cherry Orchard at the Theatre Royal, Windsor.", "title": "Career" }, { "paragraph_id": 31, "text": "Since November 2021, McKellen and ABBA member Björn Ulvaeus have posted Instagram videos featuring the pair knitting Christmas jumpers and other festive attire. In 2023, it was revealed that Ulvaeus and McKellen would be knitting stagewear for Kylie Minogue as part of her More Than Just a Residency concert residency at Voltaire at The Venetian Las Vegas.", "title": "Career" }, { "paragraph_id": 32, "text": "In 2023, he is set to star in the period thriller The Critic, directed by Anand Tucker. The film is written by Patrick Marber, adapted from the 2015 novel Curtain Call by Anthony Quinn. The film will premiere at the 2023 Toronto International Film Festival.", "title": "Career" }, { "paragraph_id": 33, "text": "McKellen and his first partner, Brian Taylor, a history teacher from Bolton, began their relationship in 1964. Their relationship lasted for eight years, ending in 1972. They lived in Earls Terrace, Kensington, London, where McKellen continued to pursue his career as an actor. In 1978 he met his second partner, Sean Mathias, at the Edinburgh Festival. 
This relationship lasted until 1988, and according to Mathias, it was tempestuous, with conflicts over McKellen's success in acting versus Mathias's somewhat less-successful career. The two remained friends, with Mathias later directing McKellen in Waiting for Godot at the Theatre Royal Haymarket in 2009. The pair entered into a business partnership with Evgeny Lebedev, purchasing the lease of The Grapes public house in Narrow Street. As of 2005, McKellen had been living in Narrow Street, Limehouse, for more than 25 years, more than a decade of which had been spent in a five-storey Victorian conversion.", "title": "Personal life" }, { "paragraph_id": 34, "text": "McKellen is an atheist. In the late 1980s, he lost his appetite for every kind of meat but fish, and has since followed a mainly pescetarian diet. In 2001, Ian McKellen received the Artist Citizen of the World Award (France).", "title": "Personal life" }, { "paragraph_id": 35, "text": "McKellen has a tattoo of the Elvish number nine, written using J. R. R. Tolkien's constructed script of Tengwar, on his shoulder in reference to his involvement in the Lord of the Rings and the fact that his character was one of the original nine companions of the Fellowship of the Ring. All but one of the other actors of \"The Fellowship\" (Elijah Wood, Sean Astin, Orlando Bloom, Billy Boyd, Sean Bean, Dominic Monaghan and Viggo Mortensen) have the same tattoo (John Rhys-Davies did not get the tattoo, but his stunt double Brett Beattie did).", "title": "Personal life" }, { "paragraph_id": 36, "text": "McKellen was diagnosed with prostate cancer in 2006. In 2012, he stated on his blog that \"There is no cause for alarm. I am examined regularly and the cancer is contained. 
I've not needed any treatment\".", "title": "Personal life" }, { "paragraph_id": 37, "text": "McKellen became an ordained minister of the Universal Life Church in early 2013 in order to preside over the marriage of his friend and X-Men co-star Patrick Stewart to the singer Sunny Ozell.", "title": "Personal life" }, { "paragraph_id": 38, "text": "McKellen was awarded an honorary Doctorate of Letters by Cambridge University on 18 June 2014. He was made a Freeman of the City of London on Thursday 30 October 2014. The ceremony took place at Guildhall in London. He was nominated by London's Lord Mayor Fiona Woolf, who said he was an \"exceptional actor\" and \"tireless campaigner for equality\". He is also an Emeritus Fellow of St Catherine's College, Oxford.", "title": "Personal life" }, { "paragraph_id": 39, "text": "While McKellen had made his sexual orientation known to fellow actors early on in his stage career, it was not until 1988 that he came out to the general public while appearing on the BBC Radio programme Third Ear hosted by conservative journalist Peregrine Worsthorne. The context that prompted McKellen's decision, overriding any concerns about a possible negative effect on his career, was that the controversial Section 28 of the Local Government Bill was then under consideration in the British Parliament. Section 28 proposed prohibiting local authorities from promoting homosexuality \"... as a kind of pretended family relationship\". McKellen has stated that he was influenced in his decision by the advice and support of his friends, among them noted gay author Armistead Maupin. 
In a 1998 interview discussing the 29th anniversary of the Stonewall riots, McKellen commented,", "title": "Activism" }, { "paragraph_id": 40, "text": "I have many regrets about not having come out earlier, but one of them might be that I didn't engage myself in the politicking.", "title": "Activism" }, { "paragraph_id": 41, "text": "He has said of this period:", "title": "Activism" }, { "paragraph_id": 42, "text": "My own participating in that campaign was a focus for people [to] take comfort that if Ian McKellen was on board for this, perhaps it would be all right for other people to be as well, gay and straight.", "title": "Activism" }, { "paragraph_id": 43, "text": "Section 28 was, however, enacted and remained on the statute books until 2000 in Scotland and 2003 in England and Wales. Section 28 never applied in Northern Ireland.", "title": "Activism" }, { "paragraph_id": 44, "text": "In 2003, during an appearance on Have I Got News For You, McKellen claimed that when he visited Michael Howard, then Environment Secretary (responsible for local government), in 1988 to lobby against Section 28, Howard refused to change his position but did ask him to leave an autograph for his children. McKellen agreed, but wrote, \"Fuck off, I'm gay\". McKellen described Howard's junior ministers, Conservatives David Wilshire and Jill Knight, who were the architects of Section 28, as the 'ugly sisters' of a political pantomime.", "title": "Activism" }, { "paragraph_id": 45, "text": "McKellen has continued to be very active in LGBT rights efforts. 
In a statement on his website regarding his activism, the actor commented:", "title": "Activism" }, { "paragraph_id": 46, "text": "I have been reluctant to lobby on other issues I most care about—nuclear weapons (against), religion (atheist), capital punishment (anti), AIDS (fund-raiser) because I never want to be forever spouting, diluting the impact of addressing my most urgent concern; legal and social equality for gay people worldwide.", "title": "Activism" }, { "paragraph_id": 47, "text": "McKellen is a co-founder of Stonewall, an LGBT rights lobby group in the United Kingdom, named after the Stonewall riots. McKellen is also patron of LGBT History Month, Pride London, Oxford Pride, GAY-GLOS, LGBT Foundation and FFLAG where he appears in their video \"Parents Talking\".", "title": "Activism" }, { "paragraph_id": 48, "text": "In 1994, at the closing ceremony of the Gay Games, he briefly took the stage to address the crowd, saying, \"I'm Sir Ian McKellen, but you can call me Serena\": This nickname, given to him by Stephen Fry, had been circulating within the gay community since McKellen's knighthood was conferred. In 2002, he was the Celebrity Grand Marshal of the San Francisco Pride Parade and he attended the Academy Awards with his then-boyfriend, New Zealander Nick Cuthell. In 2006, McKellen spoke at the pre-launch of the 2007 LGBT History Month in the UK, lending his support to the organisation and its founder, Sue Sanders. In 2007, he became a patron of The Albert Kennedy Trust, an organisation that provides support to young, homeless and troubled LGBT people.", "title": "Activism" }, { "paragraph_id": 49, "text": "In 2006, he became a patron of Oxford Pride, stating:", "title": "Activism" }, { "paragraph_id": 50, "text": "I send my love to all members of Oxford Pride, their sponsors and supporters, of which I am proud to be one ... 
Onlookers can be impressed by our confidence and determination to be ourselves and gay people, of whatever age, can be comforted by the occasion to take the first steps towards coming out and leaving the closet forever behind.", "title": "Activism" }, { "paragraph_id": 51, "text": "McKellen has taken his activism internationally, and caused a major stir in Singapore, where he was invited to do an interview on a morning show and shocked the interviewer by asking if they could recommend him a gay bar; the programme immediately ended. In December 2008, he was named in Out's annual Out 100 list.", "title": "Activism" }, { "paragraph_id": 52, "text": "In 2010, McKellen extended his support for Liverpool's Homotopia festival in which a group of gay and lesbian Merseyside teenagers helped to produce an anti-homophobia campaign pack for schools and youth centres across the city. In May 2011, he called Sergey Sobyanin, Moscow's mayor, a \"coward\" for refusing to allow gay parades in the city.", "title": "Activism" }, { "paragraph_id": 53, "text": "In 2014, he was named in the top 10 on the World Pride Power list.", "title": "Activism" }, { "paragraph_id": 54, "text": "In April 2010, along with actors Brian Cox and Eleanor Bron, McKellen appeared in a series of TV advertisements to support Age UK, the charity recently formed from the merger of Age Concern and Help the Aged. All three actors gave their time free of charge.", "title": "Activism" }, { "paragraph_id": 55, "text": "A cricket fan since childhood, McKellen umpired in March 2011 for a charity cricket match in New Zealand to support earthquake victims of the February 2011 Christchurch earthquake.", "title": "Activism" }, { "paragraph_id": 56, "text": "McKellen is an honorary board member for the New York- and Washington, D.C.-based organization Only Make Believe. Only Make Believe creates and performs interactive plays in children's hospitals and care facilities. 
He was honoured by the organisation in 2012 and hosted their annual Make Believe on Broadway Gala in November 2013. He garnered publicity for the organisation by stripping down to his Lord of the Rings underwear on stage.", "title": "Activism" }, { "paragraph_id": 57, "text": "McKellen also has a history of supporting individual theatres. While in New Zealand filming The Hobbit in 2012, he announced a special New Zealand tour \"Shakespeare, Tolkien and You!\", with proceeds going to help save the Isaac Theatre Royal, which suffered extensive damage during the 2011 Christchurch earthquake. McKellen said he opted to help save the building as it was the last theatre he played in New Zealand (Waiting for Godot in 2010) and the locals' love for it made it a place worth supporting. In July 2017, he performed a new one-man show for a week at Park Theatre (London), donating the proceeds to the theatre.", "title": "Activism" }, { "paragraph_id": 58, "text": "Together with a number of his Lord of the Rings co-stars (plus writer Philippa Boyens and director Peter Jackson), on 1 June 2020 McKellen joined Josh Gad's YouTube series Reunited Apart which reunites the cast of popular movies through video-conferencing, and promotes donations to non-profit charities.", "title": "Activism" }, { "paragraph_id": 59, "text": "A friend of Ian Charleson and an admirer of his work, McKellen contributed an entire chapter to For Ian Charleson: A Tribute. A recording of McKellen's voice is heard before performances at the Royal Festival Hall, reminding patrons to ensure their mobile phones and watch alarms are switched off and to keep coughing to a minimum. He also took part in the 2012 Summer Paralympics opening ceremony in London as Prospero from Shakespeare's The Tempest.", "title": "Activism" }, { "paragraph_id": 60, "text": "McKellen has received two Academy Award nominations for his performances in Gods and Monsters (1999), and The Lord of the Rings: The Fellowship of the Ring (2001). 
He has also received five Primetime Emmy Award nominations. McKellen has received two Tony Award nominations, winning Best Actor in a Play for his performance in Amadeus in 1981. He has also received 12 Laurence Olivier Award nominations, winning six awards, for his performances in Pillars of the Community (1977), The Alchemist (1978), Bent (1979), Wild Honey (1984), Richard III (1991), and Ian McKellen on Stage: With Tolkien, Shakespeare, Others and YOU (2020).", "title": "Accolades and honours" }, { "paragraph_id": 61, "text": "He has also received various honorary awards, including the Pride International Film Festival's Lifetime Achievement & Distinction Award in 2004 and the Olivier Awards' Society Special Award in 2006. He also received the Evening Standard Awards' Lebedev Special Award in 2009 and the Empire Icon Award the following year. In 2017, he received the Honorary Award from the Istanbul International Film Festival. The BBC stated that his \"performances have guaranteed him a place in the canon of English stage and film actors\".", "title": "Accolades and honours" }, { "paragraph_id": 62, "text": "McKellen was awarded a CBE in 1979, knighted in 1991 for services to the performing arts, and made a Member of the Order of the Companions of Honour for services to drama and to equality in the 2008 New Year Honours.", "title": "Accolades and honours" } ]
Sir Ian Murray McKellen is an English actor. With a career spanning more than six decades, he is noted for his roles on the screen and stage in genres ranging from Shakespearean dramas and modern theatre to popular fantasy and science fiction. He is regarded as a British cultural icon and was knighted by Queen Elizabeth II in 1991. He has received numerous accolades, including a Tony Award, six Olivier Awards, and a Golden Globe Award as well as nominations for two Academy Awards, five BAFTA Awards and five Emmy Awards. McKellen made his stage debut in 1961 at the Belgrade Theatre as a member of its repertory company, and in 1965 made his first West End appearance. In 1969, he was invited to join the Prospect Theatre Company to play the lead parts in Shakespeare's Richard II and Marlowe's Edward II. In the 1970s McKellen became a stalwart of the Royal Shakespeare Company and the National Theatre of Great Britain. He has earned five Olivier Awards for his roles in Pillars of the Community (1977), The Alchemist (1978), Bent (1979), Wild Honey (1984), and Richard III (1995). McKellen made his Broadway debut in The Promise (1965). He went on to receive the Tony Award for Best Actor in a Play for his role as Antonio Salieri in Amadeus (1980). He was further nominated for Ian McKellen: Acting Shakespeare (1984). He returned to Broadway in Wild Honey (1986), Dance of Death (1990), No Man's Land (2013), and Waiting for Godot (2013), the latter being a joint production with Patrick Stewart. McKellen achieved worldwide fame for his film roles, including the titular King in Richard III (1995), James Whale in Gods and Monsters (1998), Magneto in the X-Men films, and Gandalf in The Lord of the Rings (2001–2003) and The Hobbit (2012–2014) trilogies. Other notable film roles include A Touch of Love (1969), Plenty (1985), Six Degrees of Separation (1993), Restoration (1995), Mr. Holmes (2015), and The Good Liar (2019). 
McKellen came out as gay in 1988, and has since championed LGBT social movements worldwide. He was awarded the Freedom of the City of London in October 2014. McKellen is a co-founder of Stonewall, an LGBT rights lobby group in the United Kingdom, named after the Stonewall riots. He is also patron of LGBT History Month, Pride London, Oxford Pride, GAY-GLOS, LGBT Foundation and FFLAG.
2001-12-02T22:00:35Z
2023-12-14T08:22:54Z
[ "Template:IMDb name", "Template:Navboxes", "Template:Short description", "Template:Sfn", "Template:Reflist", "Template:Webarchive", "Template:Citation", "Template:Use British English", "Template:Infobox person", "Template:Cite news", "Template:Cite magazine", "Template:Use dmy dates", "Template:Wikiquote", "Template:Good article", "Template:Cite web", "Template:IBDB name", "Template:Portal bar", "Template:Post-nominals", "Template:Blockquote", "Template:'s", "Template:Main", "Template:London Gazette", "Template:Commons", "Template:Screenonline name", "Template:Nbsp", "Template:Cite journal", "Template:Cite book", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Ian_McKellen
15,309
Intellivision
The Intellivision is a home video game console released by Mattel Electronics in 1979. The name is a portmanteau of "intelligent television". Development began in 1977, the same year as the launch of its main competitor, the Atari 2600. In 1984, Mattel sold its video game assets to a former Mattel Electronics executive and investors, eventually becoming INTV Corporation. Game development ran from 1978 to 1990, when the Intellivision was discontinued. From 1980 to 1983, more than 3.75 million consoles were sold. According to Intellivision Entertainment (the current holder of the Intellivision brand), the final tally through 1990 is somewhere between 4.5 and 5 million consoles sold. In 2009, IGN ranked the Intellivision No. 14 on its list of the greatest video game consoles of all time. It remained Mattel's only video game console until the HyperScan in 2006. The Intellivision was developed at Mattel in Hawthorne, California, alongside the Mattel Electronics line of handheld electronic games. Mattel's Design and Development group began investigating a home video game system in 1977. It was to have rich graphics and long-lasting gameplay to distinguish itself from its competitors. Mattel identified a new but expensive chipset from National Semiconductor and negotiated better pricing for a simpler design. Its consultant, APh Technological Consulting, suggested a General Instrument chipset, listed as the Gimini programmable set in the GI 1977 catalog. The GI chipset lacked reprogrammable graphics, and Mattel worked with GI to implement changes. GI published an updated chipset in its 1978 catalog. Mattel initially chose National in August 1977, but reversed course two months later, selecting the proposed GI chipset in late 1977. A team at Mattel, headed by David Chandler, began engineering the hardware, including the hand controllers.
In 1978, David Rolfe of APh developed the onboard executive control software, named Exec, and with a group of Caltech summer student employees programmed the first games. Graphics were designed by a group of artists at Mattel led by Dave James. The Intellivision was introduced at the January 1979 Las Vegas CES as a modular home computer, with the Master Component priced at US$165 and a soon-to-follow Keyboard Component also at $165 (equivalent to $670 in 2022). At the Chicago CES in June, prices were revised to $250 for each component. A shortage of key chips from manufacturer General Instrument limited the number of Intellivision Master Components produced that year. In fall 1979, Sylvania marketed its own branded Intellivision at $280 in its GTE stores in Philadelphia, Baltimore, and Washington, D.C. On December 3, Mattel delivered consoles to the Gottschalks department store chain headquartered in Fresno, California, with a suggested list price of $275. The Intellivision was also listed in the nationally distributed JCPenney Christmas 1979 catalog along with seven cartridges. It was in stores nationwide by mid-1980 with the pack-in game Las Vegas Poker & Blackjack and a library of ten cartridges. Mattel Electronics became a subsidiary in 1981. Though the Intellivision was not the first system to challenge Warner Communications's Atari, it was the first to pose a serious threat to the market leader. A series of advertisements starring George Plimpton used side-by-side game comparisons to demonstrate the superior graphics and sound of the Intellivision over the Atari 2600. One slogan called the Intellivision "the closest thing to the real thing". One example compared golf games: where the 2600's game had a blip sound and cruder graphics, the Intellivision featured a realistic swing sound on striking the ball and a more three-dimensional look. In 1980, Mattel sold out its stock of 190,000 Intellivision Master Components, along with one million cartridges.
In 1981, more than one million Intellivision consoles were sold, more than five times the previous year's total. The Intellivision Master Component was branded and distributed by various companies. Before Mattel shifted manufacturing to Hong Kong, Mattel Intellivision consoles were manufactured by GTE Sylvania. GTE Sylvania Intellivision consoles were produced alongside Mattel's, differing only by the brand name. The Sears Super Video Arcade, manufactured by Mattel in Hong Kong, has a restyled beige top cover and detachable controllers. Its default title screen lacks the "Mattel Electronics" captioning. In 1982, Radio Shack marketed the Tandyvision One, similar to the original console but with the gold plates replaced with more wood trim. In Japan, Intellivision consoles were branded for Bandai in 1982, and in Brazil there were Digimed and Digiplay consoles manufactured by Sharp in 1983. Inside every Intellivision console is 4K of ROM containing the Exec software. The Exec provides two benefits: reusable code that can effectively make a 4K cartridge an 8K game, and a software framework that lets new programmers develop games more easily and quickly. It also allows other programmers to more easily review and continue another's project. Under the supervision of David Rolfe, and with graphics from Mattel artist Dave James, APh was able to quickly create the Intellivision launch game library using mostly summer students. The drawback is that, to be flexible enough to handle many different types of games, the Exec runs less efficiently than a dedicated program. Intellivision games that leverage the Exec run at a 20 Hz frame rate instead of the 60 Hz frame rate for which the Intellivision was designed. Using the Exec framework is optional, but almost all Intellivision games released by Mattel Electronics use it and thus run at 20 Hz.
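The frame-rate tradeoff described above is easy to quantify. A minimal sketch (Python is used purely for illustration; only the 20 Hz and 60 Hz figures come from the text):

```python
# Per-update time budget: Exec-based games vs. the hardware's design rate.
HARDWARE_RATE_HZ = 60   # frame rate the Intellivision was designed for
EXEC_RATE_HZ = 20       # typical rate of games built on the Exec framework

hardware_frame_ms = 1000 / HARDWARE_RATE_HZ   # ~16.7 ms per frame
exec_frame_ms = 1000 / EXEC_RATE_HZ           # 50 ms per update

# An Exec game updates once for every three hardware frames, trading
# smoothness for the convenience of the shared framework.
frames_per_update = HARDWARE_RATE_HZ // EXEC_RATE_HZ
print(round(hardware_frame_ms, 1), exec_frame_ms, frames_per_update)
```

This is why the third-party games mentioned later, which bypassed the Exec, could reach 30 Hz or the full 60 Hz.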
The limited ROM space in the early years of Intellivision game releases also means there was no room for a computer opponent, so many early games require two human players. Initially, all Intellivision games were programmed by an outside firm, APh Technological Consulting, with 19 cartridges produced before Christmas 1980. Once the Intellivision project became successful, software development was brought in-house. Mattel formed its own software development group and began hiring programmers. The original five members of that Intellivision team were Mike Minkoff, Rick Levine, John Sohl, Don Daglow, and manager Gabriel Baum. Levine and Minkoff, a long-time Mattel Toys veteran, both transferred from the hand-held Mattel game engineering team. During 1981, Mattel hired programmers as fast as possible. Early in 1982, Mattel Electronics relocated from Mattel headquarters to an unused industrial building, and offices were renovated as new staff moved in. To keep these programmers from being hired away by rival Atari, their identities and work location were kept a closely guarded secret. In public, the programmers were referred to collectively as the Blue Sky Rangers. Most of the early games are based on traditional real-world concepts such as sports, with an emphasis on realism and depth of play within the technology of the time. The Intellivision was not marketed as a toy; as such, games such as Sea Battle and B-17 Bomber were not made in the pick-up-and-play format of arcade games, and reading the instructions is often a prerequisite. Every cartridge produced by Mattel Electronics includes two plastic controller overlays to help navigate the 12-button keypad, although not every game uses it. The game series, or networks, were Major League Sports, Action, Strategy, Gaming, Children's Learning, and later Space Action and Arcade. The network concept was dropped in 1983, as was the convenient gate-fold style box for storing the cartridge, instructions, and overlays.
Starting in 1981, programmers looking for credit and royalties on sales began leaving both APh and Mattel Electronics to create Intellivision cartridges for third-party publishers. They helped form Imagic in 1981, and in 1982 others joined Activision and Atari. Cheshire Engineering was formed by a few senior APh programmers, including David Rolfe, author of the Exec, and Tom Loughry, creator of one of the most popular Intellivision games, Advanced Dungeons and Dragons. Cheshire created Intellivision games for Activision. Third-party developers Activision, Imagic, and Coleco started producing Intellivision cartridges in 1982, and Atari, Parker Brothers, Sega, and Interphase followed in 1983. The third-party developers, not having legal access to Exec knowledge, often bypassed the Exec framework to create smooth 30 Hz and 60 Hz Intellivision games such as The Dreadnaught Factor. Cheaper ROM prices also allowed for progressively larger games as 8K, 12K, and 16K cartridges became common. The first Mattel Electronics Intellivision game to run at 60 Hz was Masters of the Universe in 1983. Marketing coined the term "Super Graphics", which appeared on the game's packaging and promotion. Mattel Electronics had a competitive advantage in its team of experienced and talented programmers. While competitors often depended on licensing well-known trademarks to sell video games, Mattel focused on original ideas. Don Daglow was a key early programmer at Mattel and became director of Intellivision game development. Daglow created Utopia, a precursor to the sim genre, and, with Eddie Dombrower, the ground-breaking sports simulation World Series Major League Baseball. Daglow was also involved with the popular Intellivision games Tron Deadly Discs and Shark! Shark!. After Mattel Electronics closed in 1984, its programmers continued to make significant contributions to the video game industry.
Don Daglow and Eddie Dombrower went on to Electronic Arts to create Earl Weaver Baseball, and Daglow founded Stormfront Studios. Bill Fisher, Steve Roney, and Mike Breen founded Quicksilver Software, and David Warhol founded Realtime Associates. The Intellivision was designed as a modular home computer, so from the beginning its packaging, promotional materials, and television commercials promised a forthcoming accessory called the Keyboard Component. The Master Component was packaged as a stand-alone video game system to which the Keyboard Component could be added, providing a computer keyboard and tape drive. Not meant to be a hobbyist or business computer, the Intellivision home computer was meant to run pre-programmed software and bring "data flow" (Videotex) into the home. The Keyboard Component adds an 8-bit 6502 processor, making the Intellivision a dual-processor computer. It has 16K of 10-bit shared RAM that can load and execute both Intellivision CP1610 and 6502 program code from tape, a large amount given that typical contemporary cartridges were 4K. The cassettes have two tracks of digital data and two tracks of analog audio, completely controlled by the computer. Two tracks are read-only for the software, and two tracks are for user data. The tape drive is block addressed with high-speed indexing. A high-resolution 40×24 monochrome text display can overlay regular Intellivision graphics. There is a microphone port and two expansion ports for peripherals and RAM; the Microsoft BASIC programming cartridge uses one of these ports. Expanded memory cartridges support 1,000 pages of 8 KB each. A third, pass-through cartridge port is for regular Intellivision cartridges. The unit uses the Intellivision's power supply. A 40-column thermal printer was available, and a telephone modem was planned along with voice synthesis and voice recognition.
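The expanded-memory figure above implies a sizeable ceiling for 1979-era hardware. A quick check (a sketch; the 8 KB = 8,192-byte page size is an assumption):

```python
# Total capacity of the Keyboard Component's expanded memory cartridges,
# per the stated figure of 1,000 pages of 8 KB each.
PAGES = 1_000
PAGE_BYTES = 8 * 1024          # assuming an 8 KB page is 8,192 bytes

total_bytes = PAGES * PAGE_BYTES
total_mib = total_bytes / (1024 * 1024)
print(total_bytes, round(total_mib, 2))   # on the order of 8 MB
```

This matches the roughly 8 MB maximum cited elsewhere for the Keyboard Component's memory scheme.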
David Rolfe of APh wrote a control program for the Keyboard Component called PicSe (Picture Sequencer), designed specifically for the development of multimedia applications. PicSe synchronizes the graphics and analog audio while concurrently saving or loading tape data. Productivity software for home finances, personal improvement, and self-education was planned. Subject experts were consulted, and their voices were recorded and used in the software. Three applications using the PicSe system were released on cassette tape: Conversational French, Jack LaLanne's Physical Conditioning, and Spelling Challenge. Programs written in BASIC do not have access to Intellivision graphics and were sold at a lower price. Five BASIC applications were released on tape: Family Budgeting, Geography Challenge, and Crosswords I, II, and III. The Keyboard Component was an ambitious piece of engineering for its time, and it was repeatedly delayed as engineers tried to reduce manufacturing costs. In August 1979, a breadboard form of the component was successfully entered into the Sears Market Research Program. In December 1979, Mattel had production-design working units but decided on a significant internal design change to consolidate circuit boards. In September 1980, it was test marketed in Fresno, California, but without software except for the BASIC programming cartridge. In late 1981, the design changes were finally implemented and the Keyboard Component was released at $600 (equivalent to $1,930 in 2022), in Seattle and New Orleans only. Customers who complained in writing could buy a Keyboard Component directly from Mattel. The printer, a rebadged Alphacom Sprinter 40, was only available by mail order. The Keyboard Component's repeated delays became so notorious around Mattel headquarters that comedian Jay Leno, performing at Mattel's 1981 Christmas party, got his biggest response of the evening with the line: "You know what the three big lies are, don't you?
'The check is in the mail', 'I'll still respect you in the morning', and 'The keyboard will be out in spring.'" Complaints from consumers who had bought the Intellivision specifically on the promise of a "coming soon" personal-computer upgrade eventually caught the attention of the Federal Trade Commission (FTC), which began investigating Mattel Electronics for fraud and false advertising. In mid-1982, the FTC ordered Mattel to pay a monthly fine (said to be $10,000) until the promised computer upgrade was in full retail distribution. To end the ongoing fines, the Keyboard Component was officially canceled in August 1982 and the Entertainment Computer System (ECS) module was offered in its place. Part of Mattel's settlement with the FTC involved offering to buy back all of the existing Keyboard Components from customers. Mattel provided a full refund, but customers without a receipt received $550 for the Keyboard Component, $60 for the BASIC cartridge, and $30 for each software cassette. Any customer who opted to keep the products was required to sign a waiver acknowledging that no more software would be written for the system and absolving Mattel of any future responsibility for technical support; they were also compensated with $1,000 worth of Mattel Electronics products. Though approximately 4,000 Keyboard Components were manufactured, it is not clear how many were sold, and surviving units are rare. Many were dismantled for parts; others were used by Mattel Electronics programmers as part of their development systems. A Keyboard Component could be interfaced with an Intellivision development system in place of the hand-built Magus board RAM cartridge, though data transfer to the Keyboard Component RAM is serial and slower than the Magus board's parallel interface. The Keyboard Component debacle was ranked No. 11 on GameSpy's "25 dumbest moments in gaming".
In mid-1981, Mattel's upper management was becoming concerned that the Keyboard Component division would never produce a sellable product. As a result, Mattel Electronics set up a competing internal engineering team whose stated mission was to produce an inexpensive add-on called the "Basic Development System", or BDS, to be sold as an educational device introducing kids to the concepts of computer programming. The rival BDS engineering group had to keep the project's real purpose a secret among themselves, fearing that if David Chandler, the head of the Keyboard Component team, found out about it, he would use his influence to end the project. The group eventually came up with a much less expensive alternative. Originally dubbed the "LUCKI" (Low User-Cost Keyboard Interface), it lacked many of the sophisticated features envisioned for the original Keyboard Component. Gone, for example, were the 16K (8 MB max) of RAM, the secondary CPU, and the high-resolution text; instead, the ECS offered a mere 2 KB RAM expansion, a built-in BASIC that was marginally functional, plus a much-simplified cassette and printer interface. Ultimately, this fulfilled the original promise of turning the Intellivision into a computer: it made it possible to write programs and store them to tape, and it interfaced with a printer well enough to allow Mattel to claim that it had delivered the promised computer upgrade and stop the FTC's mounting fines. It even offered, via an additional sound chip (AY-3-8917) inside the ECS module and an optional 49-key music synthesizer keyboard, the possibility of turning the Intellivision into a multi-voice synthesizer that could be used to play or learn music. In the fall of 1982, the LUCKI, now renamed the Entertainment Computer System (ECS), was presented at the annual sales meeting, officially ending the ill-fated Keyboard Component project.
A new advertising campaign was aired in time for the 1982 Christmas season, and the ECS itself was shown to the public at the January 1983 Consumer Electronics Show (CES) in Las Vegas. A few months later, the ECS hit the market, and the FTC agreed to drop the $10,000-per-month fines. By the time the ECS made its retail debut as the Intellivision Computer Module, an internal shake-up at the top levels of Mattel Electronics' management had shifted the company's focus away from hardware add-ons in favor of software, and the ECS received little marketing support. Further hardware developments, including a planned Program Expander that would have added another 16K of RAM and a more intricate, fully featured Extended BASIC to the system, were halted. In the end, six games were released for the ECS; a few more were completed but not released. The ECS also offered four-player gameplay with the optional addition of two extra hand controllers. Four-player games were in development when Mattel Electronics closed in 1984. World Cup Soccer was later completed and released in 1985 by Dextel in Europe and then INTV Corporation in North America. The documentation does not mention it, but when the ECS Computer Adapter is used, World Cup Soccer can be played with one to four players, or by two players cooperatively against the computer. In 1982, Mattel introduced the Intellivoice Voice Synthesis Module, a speech synthesizer for compatible cartridges. The Intellivoice was novel in two respects: human-sounding male and female voices with distinct accents, and speech-supporting games designed with speech as an integral part of the gameplay. Like the Intellivision chipset, the Intellivoice chipset was developed by General Instrument. The SP0256-012 orator chip has 2 KB of ROM inside, used to store the speech for numerical digits, some common words, and the phrase "Mattel Electronics presents".
Speech can also be processed from the Intellivoice's SP650 buffer chip, stored in and loaded from cartridge memory. That buffer chip has its own I/O, and the Intellivoice has a 30-pin expansion port under a removable top plate; Mattel Electronics planned to use that connector for wireless hand controllers. Mattel Electronics built a state-of-the-art voice processing lab to produce the phrases used in Intellivoice games. However, the amount of speech that could be compressed into an 8K or 12K cartridge and still leave room for a game was limited. The Intellivoice cartridges Space Spartans and B-17 Bomber sold about 300,000 copies each, priced a few dollars more than regular Intellivision cartridges. However, at $79 the Intellivoice did not sell as well as Mattel expected, and Intellivoices were later offered free with the purchase of a Master Component. In August 1983, the Intellivoice system was quietly phased out. A children's title called Magic Carousel and foreign-language versions of Space Spartans were completed but shelved. Two additional games, Woody Woodpecker and Space Shuttle, went unfinished with their voice recordings unused. Four Intellivoice games were released: Space Spartans, B-17 Bomber, Bomb Squad, and Tron: Solar Sailer. A fifth game, Intellivision World Series Major League Baseball, developed as part of the Entertainment Computer System series, also supports the Intellivoice if both the ECS and Intellivoice are connected concurrently. Unlike the Intellivoice-specific games, however, World Series Major League Baseball is also playable without the Intellivoice module (but not without the ECS). In the spring of 1983, Mattel introduced the Intellivision II, a more compact redesign of the original with updated styling, designed to be less expensive to manufacture and service. It also had longer controller cords.
The Intellivision II was initially released without a pack-in game but was later packaged with BurgerTime in the United States and Lock'N'Chase in Canada. In 1984, the Digiplay Intellivision II was introduced in Brazil, the only country outside North America to have the redesigned Intellivision II. Using an external AC adapter (16.2 V AC), consolidating some ICs, and taking advantage of relaxed FCC emission standards, the Intellivision II has a significantly smaller footprint than the original. The controllers, now detachable, have a different feel, with plastic rather than rubber side buttons and a flat membrane keypad. Users of the original Intellivision missed the ability to find keypad buttons by the tactile feel of the original controller's bubble keypad. One functional difference was the addition of a video input to the cartridge port, added specifically to support the System Changer, an accessory also released in 1983 by Mattel that played Atari 2600 cartridges through the Intellivision. The Intellivision hand controllers could be used to play Atari 2600 games, and the System Changer also had two controller ports compatible with Atari joysticks. The original Intellivision required a hardware modification, a service provided by Mattel, to work with the System Changer. Otherwise, the Intellivision II was promoted as compatible with the original. It was discovered, however, that a few Coleco Intellivision games did not work on the Intellivision II: Mattel had secretly changed the Exec internal ROM program in an attempt to lock out third-party games. A few of Coleco's early games were affected, but the third-party developers quickly figured out how to get around it. Mattel's own Electric Company Word Fun, however, will not run on the Intellivision II due to this change. In a separate issue, also caused by the Exec changes, Super Pro Football experiences a minor glitch in which the quarterback does not appear until after the ball is hiked.
There were also some minor changes to the sound chip (AY-3-8914A/AY-3-8916) affecting sound effects in some games. Programmers at Mattel discovered the audio differences and avoided the problem in later games. As early as 1981, Dave Chandler's group began designing what would have been Mattel's next-generation console, codenamed Decade and now referred to as the Intellivision IV. It would have been based on the 32-bit MC68000 processor and a 16-bit custom-designed advanced graphics interface chip. Specifications called for dual display support, 240×192 bitmap resolution, 16 programmable 12-bit colors (from a palette of 4,096), antialiasing, 40×24 tiled graphics modes, four colors per tile (16 with shading), a text layer with independent scrolling, 16 multicolored 16×16 sprites per scan line, and 32-level hardware sprite scaling. Line interrupts for reprogramming sprite and color registers would allow for many more sprites and colors on screen at the same time. It was intended as a machine that could lead Mattel Electronics into the 1990s; however, on August 4, 1983, most hardware people at Mattel Electronics were laid off. In 1982, with new machines introduced by competitors, Mattel marketing wanted to bring an upgraded system to market sooner. The Intellivision III was to be an upgraded but backward-compatible system, based on a similar CP1610 processor and an improved STIC graphics chip producing double the resolution with more sprites and colors. The Intellivision III never proceeded past the prototype stage; a new Exec was written for it, but no games. It was cancelled in mid-1983; its planned specifications survive in a Mattel document titled Target Specification Intellivision III. According to the company's 1982 Form 10-K, Mattel had almost 20% of the domestic video-game market. Mattel Electronics provided 25% of revenue and 50% of operating income in fiscal 1982.
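The Intellivision IV's planned bitmap mode implies a modest framebuffer by later standards. A rough sketch (the packed 4-bits-per-pixel layout for 16 on-screen colors is an assumption, not stated in the specifications):

```python
# Framebuffer size implied by the Intellivision IV's planned bitmap mode.
WIDTH, HEIGHT = 240, 192       # stated bitmap resolution
BITS_PER_PIXEL = 4             # assumed: 16 colors -> 4 bits per pixel

framebuffer_bytes = WIDTH * HEIGHT * BITS_PER_PIXEL // 8
framebuffer_kb = framebuffer_bytes / 1024
print(framebuffer_bytes, framebuffer_kb)   # 23040 bytes, 22.5 KB
```

Even a full bitmap at those specifications would have fit in well under 32 KB of video memory, which helps explain why such a design looked feasible for an early-1980s console.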
Although the Atari 2600 had more third-party development, Creative Computing Video & Arcade Games reported after visiting the summer 1982 Consumer Electronics Show that "the momentum is tremendous". Activision and Imagic began releasing games for the Intellivision, as did hardware rival Coleco, while Mattel created "M Network" branded games for Atari's system. The company's advertising budget increased to over $20 million for the year. In its October 1982 stockholders' report, Mattel announced that Electronics had so far that year posted a nearly $100 million profit on nearly $500 million in sales, a threefold increase over October 1981. However, the same report predicted a loss for the upcoming quarter. Hiring still continued, as did the company's optimism that the investment in software and hardware development would pay off. The M Network brand expanded to personal computers, and an office in Taiwan was opened to handle Apple II programming. The original five-person Mattel game development team had grown to 110 people under new vice president Baum, while Daglow led Intellivision development and top engineer Minkoff directed work on all other platforms. In February 1983, Mattel Electronics opened an office in the south of France to provide European input on Intellivision games and to develop games for the ColecoVision. At its peak, Mattel Electronics employed 1,800 people. Amid the flurry of new hardware and software development, there was trouble for the Intellivision. New game systems (the ColecoVision and Atari 5200) introduced in 1982 took advantage of falling RAM prices to offer graphics closer to arcade quality. In 1983, the price of home computers, particularly the Commodore 64, came down drastically to compete with video game system sales. The market became flooded with hardware and software, and retailers were ill-equipped to cope. In spring 1983, hiring at Mattel Electronics came to a halt.
At the June 1983 Consumer Electronics Show in Chicago, Mattel Electronics had the opportunity to show off all of its new products, but the response was underwhelming. Several people in top management positions were replaced due to massive losses. On July 12, 1983, Mattel Electronics president Josh Denham was replaced with outsider Mack Morris. Morris brought in former Mattel Electronics president and marketing director Jeff Rochlis as a consultant, and all projects came under review. The Intellivision III was cancelled, and then all new hardware development was stopped when 660 jobs were cut on August 4. The price of the Intellivision II (which had launched at $150 earlier that year) was lowered to $69, and Mattel Electronics was to become a software company. However, by October 1983, Electronics' losses were over $280 million for the year, and one-third of the programming staff were laid off. Another third were gone by November, and on January 20, 1984, the remaining programming staff were laid off. The Taiwan and French offices continued a little while longer due to contract and legal obligations. On February 4, 1984, Mattel sold the Intellivision business for $20 million. In 1983, 750,000 Intellivision Master Components were sold, compared to 1.8 million in 1982. Terrence Valeski, former Mattel Electronics senior vice president of marketing, believed that although the losses were huge, demand for video games had increased in 1983. Valeski found investors and purchased the rights to the Intellivision, the games, and the inventory from Mattel. A new company, Intellivision Inc., was formed, and by the end of 1984 Valeski had bought out the other investors and changed the name to INTV Corporation. They continued to supply the large toy stores and sold games through direct mail order. At first they sold the existing inventory of games and Intellivision II systems; when the inventory of games sold out, they produced more, but without the Mattel name or unnecessary licenses on the printed materials.
To lower costs, the boxes, instructions, and overlays were produced at lower quality than Mattel's. In France, the Mattel Electronics office found investors and became Nice Ideas in April 1984. They continued to work on Intellivision, ColecoVision, and other computer games, producing Intellivision World Cup Soccer and Championship Tennis, both released in 1985 by European publisher Dextel. In 1985, INTV Corporation introduced the INTV System III, also branded as the Intellivision Super Pro System, using the same design as the original Intellivision model but in black and silver. That same year, INTV Corp introduced two new games that had been completed at Mattel but not released: Thunder Castle and World Championship Baseball. With this early success, INTV Corp decided to produce new games, and in 1986 it introduced Super Pro Football, an update of Mattel's NFL Football. INTV Corp continued a relationship that Mattel had with Data East and produced all-new cartridges such as Commando in 1987 and Body Slam Wrestling in 1988. Also in 1987, INTV Corp released Dig Dug, purchased from Atari, where the game had been completed but not released in 1984. They also produced games for next-generation hardware, with Monster Truck Rally for the Nintendo Entertainment System (NES) in 1991, also released as Stadium Mud Buggies for the Intellivision in 1989. Licensing agreements with Nintendo and Sega required INTV Corporation to discontinue the Intellivision in 1990. INTV Corporation published 21 new Intellivision cartridges, bringing the Intellivision library to a total of 124 cartridges plus one compilation cartridge. In 1989, INTV Corp and World Book Encyclopedia entered into an agreement to manufacture an educational video game system called the Tutorvision. It is a modified Intellivision, its case molded in light beige with gold and blue trim. The Exec ROM was expanded, system RAM was increased to 1.75K, and graphics RAM to 2 KB.
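The 2 KB of graphics RAM can be sanity-checked against the screen size. A sketch assuming the standard Intellivision background of 20×12 cards, each an 8×8, one-bit-per-pixel pattern of 8 bytes (these card dimensions are not stated in the text):

```python
# Graphics RAM needed for a unique tile (card) in every background position.
CARDS_WIDE, CARDS_HIGH = 20, 12   # assumed standard Intellivision screen
BYTES_PER_CARD = 8                # 8x8 pixels at 1 bit per pixel

needed_bytes = CARDS_WIDE * CARDS_HIGH * BYTES_PER_CARD
gram_bytes = 2 * 1024             # the Tutorvision's 2 KB of graphics RAM
print(needed_bytes, needed_bytes <= gram_bytes)   # 1920 True
```

Under those assumptions, 1,920 bytes of card patterns fit within 2 KB with room to spare.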
That is enough graphics RAM to define unique graphic tiles for the entire screen. Games were designed by World Book and J. Hakansson Associates, and programmed by Realtime Associates. Sixteen games were in production, plus one Canadian variation. However, the cartridges and the Tutorvision were never released; instead, World Book and INTV Corporation sued each other. In 1990, INTV Corporation filed for bankruptcy protection, and it closed in 1991. An unknown number of later Intellivision Super Pro systems have Tutorvision hardware inside; a subset of these units contain the full Tutorvision Exec and can play Tutorvision games. Consoles in the family comprise the Intellivision, Super Video Arcade, Tandyvision One, Intellivision II, INTV System III, and Super Pro System. The Intellivision controller's directional pad was called a "control disc" and marketed as having the "functionality of both a joystick and a paddle". The controller was ranked the fourth-worst video game controller by IGN editor Craig Harris. A July 1980 article in Video magazine said, "Now, arcade addicts can revel in the most sophisticated games this side of the complex simulations designed for high-level computers right in their own livingrooms." It added, "It may not be perfect but it's certainly the best unit offered so far to players of electronic video games", and, of the controller: "Those used to joysticks will have to endure a short period of adjustment, but even finicky players will be forced to agree that the company has developed a truly elegant solution to the controller problem." Ken Uston published Ken Uston's Guide to Buying and Beating the Home Video Games in 1982 as a guide for potential buyers of console systems and cartridges, as well as a brief strategy guide to numerous cartridge games then in existence. He described the Intellivision as "the most mechanically reliable of the systems… The controller (used during "many hours of experimentation") worked with perfect consistency.
The unit never had overheating problems, nor were loose wires or other connections encountered." However, Uston rated the controls and control system as "below average" and the worst of the consoles he tested (including the Atari 2600, Magnavox Odyssey², Astrovision, and Fairchild Channel F). Jeff Rovin lists Intellivision as one of the seven major suppliers of videogames in 1982 and calls it "the unchallenged king of graphics", but says the controllers can be "difficult to operate", notes that if a controller breaks the entire unit must be shipped off for repairs (since the controllers did not detach at first), and explains that the overlays "are sometimes so stubborn as to tempt one's patience". A 1996 article in Next Generation said the Intellivision "had greater graphics power than the dominant Atari 2600. It was slower than the 2600 and had less software available, but it was known for its superior sports titles." A year later, Electronic Gaming Monthly assessed the Intellivision in an overview of older gaming consoles, remarking that the controllers "were as comfortable as they were practical. The unique disk-shaped directional pad provided unprecedented control for the time, and the numeric keypad opened up new options previously unavailable in console gaming." They praised the breadth of the software library but said there was a lack of genuinely standout games. Intellivision games became readily available again when Keith Robinson and Stephen Roney, both former Intellivision programmers at Mattel Electronics, obtained exclusive rights to the Intellivision and its games in 1997. That year they formed a new company, Intellivision Productions, and made Intellivision for PC Volume 1 available as a free download, allowing Intellivision games to be played on a modern computer for the first time. That download includes three Intellivision games and an MS-DOS Intellivision emulator that plays the original game code. 
It was followed by Volume 2 and another three games, including Deep Pockets Super Pro Pool & Billiards, a game completed in 1990 but unreleased until this 1997 download. In 2000, the Intellipack 3 download followed with another four Intellivision games and emulators for Windows or Macintosh. Intellivision Productions released Intellivision Lives! and Intellivision Rocks on compact disc in 1998 and 2001. These compilation CDs play the original game code through emulators for MS-DOS, Windows, and Macintosh computers. Together they include over 100 Intellivision games, among them the previously unreleased King of the Mountain, Takeover, Robot Rubble, and League of Light. Intellivision Rocks includes Intellivision games made by Activision and Imagic. Some games could not be included due to licensing; others simply used different titles to avoid trademarked names. The CDs are also a resource for development history, box art, hidden features, programmer biographies, video interviews, and original commercials. Also in 1997, Intellivision Productions announced it would sell development tools allowing customers to program their own Intellivision games, providing documentation, PC-compatible cross-assemblers, and the Magus II PC Intellivision cartridge interface. The project was cancelled, but the company did provide copies of "Your Friend the EXEC", the programmer's guide to the Intellivision executive control software. By 2000, Intellivision hobbyists had created their own development tools, including Intellivision memory cartridges. In 2005, Intellivision Productions announced that new Intellivision cartridges were to be produced: "Deep Pockets and Illusions will be the first two releases in a series of new cartridges for the Intellivision. The printed circuit boards, the cartridge casings, the boxes are all being custom manufactured for this special series." 
Illusions was completed at Mattel Electronics' French office in 1983 but never released. Deep Pockets Super Pro Pool & Billiards was programmed for INTV Corporation in 1990 and only released as a ROM file in 1998. However, no new cartridges were produced. Previously, in 2000, Intellivision Productions did release new cartridges for the Atari 2600 and ColecoVision: Sea Battle and Swordfight were Atari 2600 games created by Mattel Electronics in the early 1980s but not previously released, and Steamroller (ColecoVision) was developed for Activision in 1984 and not previously released. In 1999, Activision released A Collection of Intellivision Classic Games for PlayStation. Also known as Intellivision Classics, it has 30 emulated Intellivision games as well as video interviews with some of the original programmers. All of the games were licensed from Intellivision Productions, and none of the Activision or Imagic Intellivision games were included. In 2003, Crave Entertainment released a PlayStation 2 version of Intellivision Lives!, followed by Xbox and GameCube versions in 2004. In 2010, Virtual Play Games released Intellivision Lives! for the Nintendo DS, including one never-before-released game, Blow Out. In 2008, Microsoft made Intellivision Lives! available for download on the Xbox Live Marketplace as an Xbox Original, playable on the Xbox 360. In 2003, the Intellivision 25 and Intellivision 10 direct-to-TV systems were released by Techno Source Ltd. These are all-in-one, single-controller units that plug directly into a television; one includes 25 games, the other ten. These Intellivision games were not emulated but rewritten for the native processor (NES-based hardware) and adapted to a contemporary controller; as such, they look and play differently from the originals. In 2005 they were updated for two-player play as the Intellivision X2 with 15 games. The line was commercially successful, altogether selling about 4 million units by the end of 2006. 
Several licensed Intellivision games became available for Windows computers through the GameTap subscription gaming service in 2005, including Astrosmash, Buzz Bombers, Hover Force, Night Stalker, Pinball, Shark! Shark!, Skiing, and Snafu. Installation of the GameTap Player software was required to access the emulator and games. The VH1 Online Arcade made nine Intellivision games available in 2007; using a Shockwave-based emulator, these games could be played directly in a web browser with Shockwave Player. In 2010, VH1 Classic and MTV Networks released six Intellivision games for iOS. Intellivision games had first been adapted to mobile phones by publisher THQ Wireless in 2001. On March 24, 2010, Microsoft launched the Game Room service for Xbox Live and Games for Windows Live. This service includes support for Intellivision games and allows players to compete for high scores via online leaderboards. At the 2011 Consumer Electronics Show, Microsoft announced a version of Game Room for Windows Phone, promising a catalog of 44 Intellivision games. AtGames, through its Direct2Drive digital store, offers Windows-compatible Intellivision compilations for download purchase. The number of Intellivision games that can be played effectively with contemporary game controllers is limited. On October 1, 2014, AtGames Digital Media, Inc., under license from Intellivision Productions, Inc., released the Intellivision Flashback classic game console. It is a miniature-sized Intellivision console with two original-sized Intellivision controllers. While adapters have been available to interface original Intellivision controllers with personal computers, the Intellivision Flashback includes two new Intellivision controllers identical in layout and function to the originals. It comes with 60 (61 at Dollar General) emulated Intellivision games built into ROM and a sample set of plastic overlays for 10 games. 
The Advanced Dungeons & Dragons games were included as Crown of Kings and Minotaur. As with many of the other Intellivision compilations, no games requiring third-party licensing were included. In May 2018, Tommy Tallarico announced that he had acquired the rights to the Intellivision brand and games, with plans to launch a new home video game console. A new company, Intellivision Entertainment, was formed with Tallarico serving as president. Intellivision Productions was renamed Blue Sky Rangers Inc., and its video game intellectual property was transferred to Intellivision Entertainment.
[ { "paragraph_id": 0, "text": "The Intellivision is a home video game console released by Mattel Electronics in 1979. The name is a portmanteau of \"intelligent television\". Development began in 1977, the same year as the launch of its main competitor, the Atari 2600. In 1984, Mattel sold its video game assets to a former Mattel Electronics executive and investors, eventually becoming INTV Corporation. Game development ran from 1978 to 1990, when the Intellivision was discontinued. From 1980 to 1983, more than 3.75 million consoles were sold. According to Intellivision Entertainment (the current holder of the Intellivision brand), the final tally through 1990 is somewhere between 4.5 and 5 million consoles sold.", "title": "" }, { "paragraph_id": 1, "text": "In 2009, IGN ranked the Intellivision No. 14 on its list of the greatest video game consoles of all time. It remained Mattel's only video game console until the HyperScan in 2006.", "title": "" }, { "paragraph_id": 2, "text": "The Intellivision was developed at Mattel in Hawthorne, California, along with the Mattel Electronics line of handheld electronic games. Mattel's Design and Development group began investigating a home video game system in 1977. It was to have rich graphics and long-lasting gameplay to distinguish itself from its competitors. Mattel identified a new but expensive chipset from National Semiconductor and negotiated better pricing for a simpler design. Its consultant, APh Technological Consulting, suggested a General Instrument chipset, listed as the Gimini programmable set in the GI 1977 catalog. The GI chipset lacked reprogrammable graphics, and Mattel worked with GI to implement changes. GI published an updated chipset in its 1978 catalog. After initially choosing National in August 1977, Mattel reversed course and chose the proposed GI chipset two months later, in late 1977. A team at Mattel, headed by David Chandler, began engineering the hardware, including the hand controllers. 
In 1978, David Rolfe of APh developed the onboard executive control software named Exec, and with a group of Caltech summer student employees programmed the first games. Graphics were designed by a group of artists at Mattel led by Dave James.", "title": "History" }, { "paragraph_id": 3, "text": "The Intellivision was introduced at the 1979 Las Vegas CES in January as a modular home computer with the Master Component priced at US$165 and a soon-to-follow Keyboard Component also at $165 (equivalent to $670 in 2022). At Chicago CES in June, prices were revised to $250 for each component. A shortage of key chips from manufacturer General Instrument resulted in a limited number of Intellivision Master Components produced that year. In Fall 1979, Sylvania marketed its own branded Intellivision at $280 in its GTE stores at Philadelphia, Baltimore, and Washington, D.C. On December 3, Mattel delivered consoles to the Gottschalks department store chain headquartered in Fresno, California, with a suggested list price of $275. The Intellivision was also listed in the nationally distributed JCPenney Christmas 1979 catalog along with seven cartridges. It was in stores nationwide by mid-1980 with the pack-in game Las Vegas Poker & Blackjack and a library of ten cartridges. Mattel Electronics became a subsidiary in 1981.", "title": "History" }, { "paragraph_id": 4, "text": "Though the Intellivision was not the first system to have challenged Warner Communications's Atari, it was the first to have posed a serious threat to the market leader. A series of advertisements starring George Plimpton used side-by-side game comparisons to demonstrate the superior graphics and sound of Intellivision over the Atari 2600. One slogan calls Intellivision \"the closest thing to the real thing\". One such example compared golf games; where the 2600's games had a blip sound and cruder graphics, the Intellivision featured a realistic swing sound and striking of the ball and a more 3D look. 
In 1980, Mattel sold out its stock of 190,000 Intellivision Master Components, along with one million cartridges. In 1981, more than one million Intellivision consoles were sold, more than five times the previous year's total.", "title": "History" }, { "paragraph_id": 5, "text": "The Intellivision Master Component was branded and distributed by various companies. Before Mattel shifted manufacturing to Hong Kong, Intellivision consoles were manufactured by GTE Sylvania; GTE Sylvania-branded consoles were produced alongside Mattel's, differing only in brand name. The Sears Super Video Arcade, manufactured by Mattel in Hong Kong, has a restyled beige top cover and detachable controllers. Its default title screen lacks the \"Mattel Electronics\" captioning. In 1982, Radio Shack marketed the Tandyvision One, similar to the original console but with the gold plates replaced with more wood trim. In Japan, Intellivision consoles were branded for Bandai in 1982, and in Brazil there were Digimed and Digiplay consoles manufactured by Sharp in 1983.", "title": "History" }, { "paragraph_id": 6, "text": "Inside every Intellivision console is 4K of ROM containing the Exec software. It provides two benefits: reusable code that can effectively make a 4K cartridge an 8K game, and a software framework that lets new programmers develop games more easily and quickly. It also allows other programmers to more easily review and continue another's project. Under the supervision of David Rolfe at APh, and with graphics from Mattel artist Dave James, APh was able to quickly create the Intellivision launch game library using mostly summer students. The drawback is that to be flexible enough to handle many different types of games, the Exec runs less efficiently than a dedicated program. Intellivision games that leverage the Exec run at a 20 Hz frame rate instead of the 60 Hz frame rate for which the Intellivision was designed. 
Using the Exec framework is optional, but almost all Intellivision games released by Mattel Electronics use it and thus run at 20 Hz. The limited ROM space in the early years of Intellivision game releases also means there is no space for a computer player, so many early multiplayer games require two human players.", "title": "History" }, { "paragraph_id": 7, "text": "Initially, all Intellivision games were programmed by an outside firm, APh Technological Consulting, with 19 cartridges produced before Christmas 1980. Once the Intellivision project became successful, software development was brought in-house. Mattel formed its own software development group and began hiring programmers. The original five members of that Intellivision team were Mike Minkoff, Rick Levine, John Sohl, Don Daglow, and manager Gabriel Baum. Levine and Minkoff, a long-time Mattel Toys veteran, both transferred from the hand-held Mattel game engineering team. During 1981, Mattel hired programmers as fast as possible. Early in 1982, Mattel Electronics relocated from Mattel headquarters to an unused industrial building. Offices were renovated as new staff moved in. To keep these programmers from being hired away by rival Atari, their identities and work location were kept a closely guarded secret. In public, the programmers were referred to collectively as the Blue Sky Rangers.", "title": "History" }, { "paragraph_id": 8, "text": "Most of the early games are based on traditional real-world concepts such as sports, with an emphasis on realism and depth of play within the technology of the time. The Intellivision was not marketed as a toy; as such, games such as Sea Battle and B-17 Bomber are not made in the pick-up-and-play format like arcade games. Reading the instructions is often a prerequisite. Every cartridge produced by Mattel Electronics includes two plastic controller overlays to help navigate the 12-button keypad, although not every game uses it. 
The game series, or networks, were Major League Sports, Action, Strategy, Gaming, Children's Learning, and later Space Action and Arcade. The network concept was dropped in 1983, as was the convenient gate-fold style box for storing the cartridge, instructions, and overlays.", "title": "History" }, { "paragraph_id": 9, "text": "Starting in 1981, programmers looking for credit and royalties on sales began leaving both APh and Mattel Electronics to create Intellivision cartridges for third-party publishers. They helped form Imagic in 1981, and in 1982 others joined Activision and Atari. Cheshire Engineering was formed by a few senior APh programmers including David Rolfe, author of the Exec, and Tom Loughry, creator of one of the most popular Intellivision games, Advanced Dungeons and Dragons. Cheshire created Intellivision games for Activision. Third-party developers Activision, Imagic, and Coleco started producing Intellivision cartridges in 1982, and Atari, Parker Brothers, Sega, and Interphase followed in 1983. The third-party developers, not having legal access to Exec knowledge, often bypassed the Exec framework to create smooth 30 Hz and 60 Hz Intellivision games such as The Dreadnaught Factor. Cheaper ROM prices also allowed for progressively larger games as 8K, 12K, and 16K cartridges became common. The first Mattel Electronics Intellivision game to run at 60 Hz is Masters of the Universe in 1983. Marketing used the term \"Super Graphics\" on the game's packaging and promotional materials.", "title": "History" }, { "paragraph_id": 10, "text": "Mattel Electronics had a competitive advantage in its team of experienced and talented programmers. As competitors often depended on licensing well-known trademarks to sell video games, Mattel focused on original ideas. Don Daglow was a key early programmer at Mattel and became director of Intellivision game development. 
Daglow created Utopia, a precursor to the sim genre and, with Eddie Dombrower, the ground-breaking sports simulation World Series Major League Baseball. Daglow was also involved with the popular Intellivision games Tron Deadly Discs and Shark! Shark!. After Mattel Electronics closed in 1984, its programmers continued to make significant contributions to the videogame industry. Don Daglow and Eddie Dombrower went on to Electronic Arts to create Earl Weaver Baseball, and Don Daglow founded Stormfront Studios. Bill Fisher, Steve Roney, and Mike Breen founded Quicksilver Software, and David Warhol founded Realtime Associates.", "title": "History" }, { "paragraph_id": 11, "text": "The Intellivision was designed as a modular home computer; so, from the beginning, its packaging, promotional materials, and television commercials promised the addition of a forthcoming accessory called the Keyboard Component. The Master Component was packaged as a stand-alone video game system to which the Keyboard Component could be added, providing the computer keyboard and tape drive. Not meant to be a hobbyist or business computer, the Intellivision home computer was meant to run pre-programmed software and bring \"data flow\" (Videotex) into the home.", "title": "History" }, { "paragraph_id": 12, "text": "The Keyboard Component adds an 8-bit 6502 processor, making the Intellivision a dual-processor computer. It has 16K 10-bit shared RAM that can load and execute both Intellivision CP1610 and 6502 program code from tape, which is a large amount as typical contemporary cartridges are 4K. The cassettes have two tracks of digital data and two tracks of analog audio, completely controlled by the computer. Two tracks are read-only for the software, and two tracks are for user data. The tape drive is block addressed with high speed indexing. A high resolution 40×24 monochrome text display can overlay regular Intellivision graphics. 
There is a microphone port and two expansion ports for peripherals and RAM. The Microsoft BASIC programming cartridge uses one of these ports. Expanded memory cartridges support 1,000 pages of 8 KB each. A third pass-through cartridge port is for regular Intellivision cartridges. It uses the Intellivision's power supply. A 40-column thermal printer was available, and a telephone modem was planned along with voice synthesis and voice recognition.", "title": "History" }, { "paragraph_id": 13, "text": "David Rolfe of APh wrote a control program for the Keyboard Component called PicSe (Picture Sequencer) specifically for the development of multimedia applications. PicSe synchronizes the graphics and analog audio while concurrently saving or loading tape data. Productivity software for home finances, personal improvement, and self education were planned. Subject experts were consulted and their voices recorded and used in the software.", "title": "History" }, { "paragraph_id": 14, "text": "Three applications using the PicSe system were released on cassette tape: Conversational French, Jack Lalanne's Physical Conditioning, and Spelling Challenge.", "title": "History" }, { "paragraph_id": 15, "text": "Programs written in BASIC do not have access to Intellivision graphics and were sold at a lower price. Five BASIC applications were released on tape: Family Budgeting, Geography Challenge, and Crosswords I, II, and III.", "title": "History" }, { "paragraph_id": 16, "text": "The Keyboard Component was an ambitious piece of engineering for its time, and it was repeatedly delayed as engineers tried to reduce manufacturing costs. In August 1979, a breadboard form of the Component was successfully entered into the Sears Market Research Program. In December 1979, Mattel had production design working units but decided on a significant internal design change to consolidate circuit boards. 
In September 1980, it was test-marketed in Fresno, California, but without software, except for the BASIC programming cartridge. In late 1981, design changes were finally implemented and the Keyboard Component was released at $600 (equivalent to $1,930 in 2022) in Seattle and New Orleans only. Customers who complained in writing could buy a Keyboard Component directly from Mattel. The printer, a rebadged Alphacom Sprinter 40, was only available by mail order.", "title": "History" }, { "paragraph_id": 17, "text": "The keyboard component's repeated delays became so notorious around Mattel headquarters that comedian Jay Leno, when performing at Mattel's 1981 Christmas party, got his biggest response of the evening with the line: \"You know what the three big lies are, don't you? 'The check is in the mail', 'I'll still respect you in the morning', and 'The keyboard will be out in spring.'\"", "title": "History" }, { "paragraph_id": 18, "text": "Complaints from consumers who had chosen to buy the Intellivision specifically on the promise of a \"coming soon\" personal-computer upgrade eventually caught the attention of the Federal Trade Commission (FTC), which started investigating Mattel Electronics for fraud and false advertising. In mid-1982, the FTC ordered Mattel to pay a monthly fine (said to be $10,000) until the promised computer upgrade was in full retail distribution. To end the ongoing fines, the Keyboard Component was officially canceled in August 1982 and the Entertainment Computer System (ECS) module offered in its place. Part of Mattel's settlement with the FTC involved offering to buy back all of the existing Keyboard Components from customers. Mattel provided a full refund, but customers without a receipt received $550 for the Keyboard Component, $60 for the BASIC cartridge, and $30 for each software cassette. 
Any customer who opted to keep the products was required to sign a waiver acknowledging that no more software would be written for the system and absolving Mattel of any future responsibility for technical support. They were also compensated with $1,000 worth of Mattel Electronics products.", "title": "History" }, { "paragraph_id": 19, "text": "Though approximately 4,000 Keyboard Components were manufactured, it is not clear how many of them were sold, and they are rare. Many of the units were dismantled for parts. Others were used by Mattel Electronics programmers as part of their development system. A Keyboard Component could be interfaced with an Intellivision development system in place of the hand-built Magus board RAM cartridge. Data transfer to the Keyboard Component RAM is done serially and is slower than the Magus board parallel interface.", "title": "History" }, { "paragraph_id": 20, "text": "The keyboard component debacle was ranked as No. 11 on GameSpy's \"25 dumbest moments in gaming\".", "title": "History" }, { "paragraph_id": 21, "text": "In mid-1981, Mattel's upper management was becoming concerned that the keyboard component division would never be able to produce a sellable product. As a result, Mattel Electronics set up a competing internal engineering team whose stated mission was to produce an inexpensive add-on called the \"Basic Development System\", or BDS, to be sold as an educational device to introduce kids to the concepts of computer programming.", "title": "History" }, { "paragraph_id": 22, "text": "The rival BDS engineering group kept the project's real purpose a secret among themselves, fearing that if David Chandler, the head of the keyboard component team, found out about it, he would use his influence to end the project. The group eventually came up with a much less expensive alternative. 
Originally dubbed the \"Lucky\" (from LUCKI: Low User-Cost Keyboard Interface), it lacked many of the sophisticated features envisioned for the original keyboard component. Gone, for example, were the 16K (expandable to 8MB) of RAM, the secondary CPU, and the high-resolution text display; instead, the ECS offered a mere 2KB RAM expansion, a built-in BASIC that was marginally functional, plus a much-simplified cassette and printer interface.", "title": "History" }, { "paragraph_id": 23, "text": "Ultimately, this fulfilled the original promise of turning the Intellivision into a computer: it made it possible to write programs and store them to tape, and it interfaced with a printer well enough for Mattel to claim that it had delivered the promised computer upgrade and to stop the FTC's mounting fines. It even offered, via an additional sound chip (AY-3-8917) inside the ECS module and an optional 49-key music synthesizer keyboard, the possibility of turning the Intellivision into a multi-voice synthesizer that could be used to play or learn music.", "title": "History" }, { "paragraph_id": 24, "text": "In the fall of 1982, the LUCKI, now renamed the Entertainment Computer System (ECS), was presented at the annual sales meeting, officially ending the ill-fated keyboard component project. A new advertising campaign was aired in time for the 1982 Christmas season, and the ECS itself was shown to the public at the January 1983 Consumer Electronics Show (CES) in Las Vegas. A few months later, the ECS hit the market, and the FTC agreed to drop the $10K per month fines.", "title": "History" }, { "paragraph_id": 25, "text": "By the time the ECS made its retail debut as the Intellivision Computer Module, an internal shake-up at the top levels of Mattel Electronics' management had caused the company's focus to shift away from hardware add-ons in favor of software, and the ECS received very little marketing support. 
Further hardware developments, including a planned Program Expander that would have added another 16K of RAM and a more intricate, fully featured Extended-BASIC to the system, were halted. In the end, six games were released for the ECS; a few more were completed but not released.", "title": "History" }, { "paragraph_id": 26, "text": "The ECS also offered four-player gameplay with the optional addition of two extra hand controllers. Four-player games were in development when Mattel Electronics closed in 1984. World Cup Soccer was later completed and released in 1985 by Dextel in Europe and then INTV Corporation in North America. The documentation does not mention it, but when the ECS Computer Adapter is used, World Cup Soccer can be played with one to four players, or two players cooperatively against the computer.", "title": "History" }, { "paragraph_id": 27, "text": "In 1982, Mattel introduced the Intellivoice Voice Synthesis Module, a speech synthesizer for compatible cartridges. The Intellivoice was novel in two respects: human-sounding male and female voices with distinct accents, and games designed with speech as an integral part of the gameplay.", "title": "History" }, { "paragraph_id": 28, "text": "Like the Intellivision chipset, the Intellivoice chipset was developed by General Instrument. The SP0256-012 orator chip has 2KB ROM inside and is used to store the speech for numerical digits, some common words, and the phrase \"Mattel Electronics presents\". Speech data can also be stored in cartridge memory and played back through the Intellivoice's SP650 buffer chip. That buffer chip has its own I/O, and the Intellivoice has a 30-pin expansion port under a removable top plate. Mattel Electronics planned to use that connector for wireless hand controllers.", "title": "History" }, { "paragraph_id": 29, "text": "Mattel Electronics built a state-of-the-art voice processing lab to produce the phrases used in Intellivoice games. 
However, the amount of speech that could be compressed into an 8K or 12K cartridge and still leave room for a game was limited. Intellivoice cartridges Space Spartans and B-17 Bomber did sell about 300,000 copies each, priced a few dollars more than regular Intellivision cartridges. However, at $79 the Intellivoice did not sell as well as Mattel expected, and Intellivoices were later offered free with the purchase of a Master Component. In August 1983 the Intellivoice system was quietly phased out. A children's title called Magic Carousel and foreign-language versions of Space Spartans were completed but shelved. Additional games Woody Woodpecker and Space Shuttle went unfinished with the voice recordings unused.", "title": "History" }, { "paragraph_id": 30, "text": "Four Intellivoice games were released: Space Spartans, B-17 Bomber, Bomb Squad, and Tron: Solar Sailer.", "title": "History" }, { "paragraph_id": 31, "text": "A fifth game, Intellivision World Series Major League Baseball, developed as part of the Entertainment Computer System series, also supports the Intellivoice if both the ECS and Intellivoice are connected concurrently. Unlike the Intellivoice-specific games, however, World Series Major League Baseball is also playable without the Intellivoice module (but not without the ECS).", "title": "History" }, { "paragraph_id": 32, "text": "In the spring of 1983, Mattel introduced the Intellivision II, a cheaper, more compact redesign of the original, that was designed to be less expensive to manufacture and service, with updated styling. It also had longer controller cords. The Intellivision II was initially released without a pack-in game but was later packaged with BurgerTime in the United States and Lock'N'Chase in Canada. In 1984, the Digiplay Intellivision II was introduced in Brazil. 
Brazil was the only country outside North America to have the redesigned Intellivision II.", "title": "History" }, { "paragraph_id": 33, "text": "By using an external AC adapter (16.2 V AC), consolidating some ICs, and taking advantage of relaxed FCC emission standards, Mattel gave the Intellivision II a significantly smaller footprint than the original. The controllers, now detachable, have a different feel, with plastic rather than rubber side buttons and a flat membrane keypad. Users of the original Intellivision missed the ability to find keypad buttons by the tactile feel of the original controller's bubble keypad.", "title": "History" }, { "paragraph_id": 34, "text": "One functional difference was the addition of a video input to the cartridge port, added specifically to support the System Changer, an accessory also released in 1983 by Mattel that played Atari 2600 cartridges through the Intellivision. The Intellivision hand controllers could be used to play Atari 2600 games. The System Changer also had two controller ports compatible with Atari joysticks. The original Intellivision required a hardware modification, a service provided by Mattel, to work with the System Changer. Otherwise, the Intellivision II was promoted as compatible with the original.", "title": "History" }, { "paragraph_id": 35, "text": "It was discovered that a few Coleco Intellivision games did not work on the Intellivision II: Mattel had secretly changed the internal Exec ROM program in an attempt to lock out third-party games. A few of Coleco's early games were affected, but the third-party developers quickly figured out how to get around it. Mattel's own Electric Company Word Fun, however, will not run on the Intellivision II due to this change. In a separate issue also caused by the Exec changes, Super Pro Football exhibits a minor glitch where the quarterback does not appear until after the ball is hiked. 
There were also some minor changes to the sound chip (AY-3-8914A/AY-3-8916) affecting sound effects in some games. Programmers at Mattel discovered the audio differences and avoided the problem in future games.", "title": "History" }, { "paragraph_id": 36, "text": "As early as 1981, Dave Chandler's group began designing what would have been Mattel's next-generation console, codenamed Decade and now referred to as the Intellivision IV. It would have been based on the 32-bit MC68000 processor and a custom-designed 16-bit advanced graphics interface chip. Specifications called for dual display support, 240×192 bitmap resolution, 16 programmable 12-bit colors (4096 colors), antialiasing, 40×24 tiled graphics modes, four colors per tile (16 with shading), a text layer and independent scrolling, 16 multicolored 16×16 sprites per scan line, and 32-level hardware sprite scaling. Line interrupts for reprogramming sprite and color registers would allow for many more sprites and colors on screen at the same time. It was intended as a machine that could lead Mattel Electronics into the 1990s; however, on August 4, 1983, most of the hardware staff at Mattel Electronics were laid off.", "title": "History" }, { "paragraph_id": 37, "text": "In 1982, with new machines introduced by competitors, Mattel marketing wanted to bring an upgraded system to market sooner. The Intellivision III was to be an upgraded but backward-compatible system, based on a similar CP1610 processor and with an improved STIC graphics chip producing double the resolution, with more sprites and colors. The Intellivision III never proceeded past the prototype stage; a new EXEC was written for it, but no games. It was cancelled in mid-1983. A Mattel document titled Target Specification Intellivision III has the following.", "title": "History" }, { "paragraph_id": 38, "text": "According to the company's 1982 Form 10-K, Mattel had almost 20% of the domestic video-game market.
Mattel Electronics provided 25% of revenue and 50% of operating income in fiscal 1982. Although the Atari 2600 had more third-party development, Creative Computing Video & Arcade Games reported after visiting the summer 1982 Consumer Electronics Show that \"the momentum is tremendous\". Activision and Imagic began releasing games for the Intellivision, as did hardware rival Coleco. Mattel created \"M Network\"-branded games for Atari's system. The company's advertising budget increased to over $20 million for the year. In its October 1982 stockholders' report, Mattel announced that Electronics had, so far that year, posted a nearly $100 million profit on nearly $500 million in sales, a threefold increase over October 1981.", "title": "History" }, { "paragraph_id": 39, "text": "However, the same report predicted a loss for the upcoming quarter. Hiring still continued, as did the company's optimism that the investment in software and hardware development would pay off. The M Network brand expanded to personal computers. An office in Taiwan was opened to handle Apple II programming. The original five-person Mattel game development team had grown to 110 people under new vice president Baum, while Daglow led Intellivision development and top engineer Minkoff directed work on all other platforms. In February 1983, Mattel Electronics opened an office in the south of France to provide European input to Intellivision games and develop games for the ColecoVision. At its peak, Mattel Electronics employed 1,800 people.", "title": "History" }, { "paragraph_id": 40, "text": "Amid the flurry of new hardware and software development, there was trouble for the Intellivision. New game systems (ColecoVision and Atari 5200) introduced in 1982 took advantage of falling RAM prices to offer graphics closer to arcade quality. In 1983, the price of home computers, particularly the Commodore 64, came down drastically to compete with video game system sales.
The market became flooded with hardware and software, and retailers were ill-equipped to cope. In spring 1983, hiring at Mattel Electronics came to a halt.", "title": "History" }, { "paragraph_id": 41, "text": "At the June 1983 Consumer Electronics Show in Chicago, Mattel Electronics had the opportunity to show off all their new products. The response was underwhelming. Several people in top management positions were replaced due to massive losses. On July 12, 1983, Mattel Electronics President Josh Denham was replaced with outsider Mack Morris. Morris brought in former Mattel Electronics president and marketing director Jeff Rochlis as a consultant, and all projects were put under review. The Intellivision III was cancelled, and then all new hardware development was stopped when 660 jobs were cut on August 4. The price of the Intellivision II (which had launched at $150 earlier that year) was lowered to $69, and Mattel Electronics was to become a software company. However, by October 1983, Electronics' losses were over $280 million for the year, and one third of the programming staff were laid off. Another third were gone by November, and on January 20, 1984, the remaining programmers were let go. The Taiwan and French offices continued a little while longer due to contract and legal obligations. On February 4, 1984, Mattel sold the Intellivision business for $20 million. In 1983, 750,000 Intellivision Master Components were sold, compared to 1.8 million in 1982.", "title": "History" }, { "paragraph_id": 42, "text": "Former Mattel Electronics senior vice president of marketing Terrence Valeski understood that, although losses were huge, demand for video games had increased in 1983. Valeski found investors and purchased the rights to Intellivision, the games, and inventory from Mattel. A new company, Intellivision Inc., was formed, and by the end of 1984 Valeski had bought out the other investors and changed the name to INTV Corporation.
They continued to supply the large toy stores and sold games through direct mail order. At first they sold the existing inventory of games and Intellivision II systems. When the inventory of games sold out, they produced more, but without the Mattel name or unnecessary licenses on the printed materials. To lower costs, the boxes, instructions, and overlays were produced at lower quality than Mattel's.", "title": "History" }, { "paragraph_id": 43, "text": "In France, the Mattel Electronics office found investors and became Nice Ideas in April 1984. They continued to work on Intellivision, ColecoVision, and other computer games. They produced Intellivision World Cup Soccer and Championship Tennis, both released in 1985 by European publisher Dextel.", "title": "History" }, { "paragraph_id": 44, "text": "In 1985, INTV Corporation introduced the INTV System III, also branded as the Intellivision Super Pro System, using the same design as the original Intellivision model but in black and silver. That same year, INTV Corp introduced two new games that had been completed at Mattel but not released: Thunder Castle and World Championship Baseball. With their early success, INTV Corp decided to produce new games and in 1986 introduced Super Pro Football, an update of Mattel NFL Football. INTV Corp continued a relationship that Mattel had with Data East and produced all-new cartridges such as Commando in 1987 and Body Slam Wrestling in 1988. Also in 1987, INTV Corp released Dig Dug, purchased from Atari, where the game had been completed in 1984 but not released. They also moved into producing next-generation games with Monster Truck Rally for the Nintendo Entertainment System (NES) in 1991, also released for the Intellivision as Stadium Mud Buggies in 1989.", "title": "History" }, { "paragraph_id": 45, "text": "Licensing agreements with Nintendo and Sega required INTV Corporation to discontinue the Intellivision in 1990.
INTV Corporation did publish 21 new Intellivision cartridges, bringing the Intellivision library to a total of 124 cartridges plus one compilation cartridge.", "title": "History" }, { "paragraph_id": 46, "text": "In 1989, INTV Corp and World Book Encyclopedia entered into an agreement to manufacture an educational video game system called Tutorvision. It is a modified Intellivision, with the case molded in light beige with gold and blue trim. The Exec ROM was expanded, system RAM was increased to 1.75 KB, and graphics RAM was increased to 2 KB, enough graphics RAM to define unique graphic tiles for the entire screen.", "title": "History" }, { "paragraph_id": 47, "text": "Games were designed by World Book and J. Hakansson Associates and programmed by Realtime Associates. Sixteen games were in production, plus one Canadian variation. However, the cartridges and the Tutorvision were never released; instead, World Book and INTV Corporation sued each other. In 1990, INTV Corporation filed for bankruptcy protection and closed in 1991.", "title": "History" }, { "paragraph_id": 48, "text": "An unknown number of later Intellivision Super Pro systems have Tutorvision hardware inside. A subset of these units contain the full Tutorvision EXEC and can play Tutorvision games.", "title": "History" }, { "paragraph_id": 49, "text": "Intellivision, Super Video Arcade, Tandyvision One, Intellivision II, INTV System III, Super Pro System", "title": "Hardware" }, { "paragraph_id": 50, "text": "The Intellivision controller features:", "title": "Hardware" }, { "paragraph_id": 51, "text": "The directional pad was called a \"control disc\" and marketed as having the \"functionality of both a joystick and a paddle\".
The controller was ranked the fourth-worst video game controller by IGN editor Craig Harris.", "title": "Hardware" }, { "paragraph_id": 52, "text": "A July 1980 article in Video magazine said, \"Now, arcade addicts can revel in the most sophisticated games this side of the complex simulations designed for high-level computers right in their own livingrooms.\" The article added, \"It may not be perfect but it's certainly the best unit offered so far to players of electronic video games\", and, of the controllers, \"Those used to joysticks will have to endure a short period of adjustment, but even finicky players will be forced to agree that the company has developed a truly elegant solution to the controller problem.\"", "title": "Reception" }, { "paragraph_id": 53, "text": "Ken Uston published Ken Uston's Guide to Buying and Beating the Home Video Games in 1982 as a guide to potential buyers of console systems/cartridges, as well as a brief strategy guide to numerous cartridge games then in existence. He described Intellivision as \"the most mechanically reliable of the systems… The controller (used during 'many hours of experimentation') worked with perfect consistency.
The unit never had overheating problems, nor were loose wires or other connections encountered.\" However, Uston rated the controls and control system as \"below average\" and the worst of the consoles he tested (including the Atari 2600, Magnavox Odyssey², Astrovision, and Fairchild Channel F).", "title": "Reception" }, { "paragraph_id": 54, "text": "Jeff Rovin lists Intellivision as one of the seven major suppliers of video games in 1982 and calls it \"the unchallenged king of graphics\", but says the controllers can be \"difficult to operate\", notes that if a controller breaks the entire unit must be shipped off for repairs (since the controllers did not detach at first), and explains that the overlays \"are sometimes so stubborn as to tempt one's patience\".", "title": "Reception" }, { "paragraph_id": 55, "text": "A 1996 article in Next Generation said the Intellivision \"had greater graphics power than the dominant Atari 2600. It was slower than the 2600 and had less software available, but it was known for its superior sports titles.\" A year later, Electronic Gaming Monthly assessed the Intellivision in an overview of older gaming consoles, remarking that the controllers \"were as comfortable as they were practical. The unique disk-shaped directional pad provided unprecedented control for the time, and the numeric keypad opened up new options previously unavailable in console gaming.\" They praised the breadth of the software library but said there was a lack of genuinely standout games.", "title": "Reception" }, { "paragraph_id": 56, "text": "Intellivision games became readily available again when Keith Robinson and Stephen Roney, both former Intellivision programmers at Mattel Electronics, obtained exclusive rights to the Intellivision and its games in 1997. That year they formed a new company, Intellivision Productions, and made Intellivision for PC Volume 1 available as a free download.
Intellivision games could be played on a modern computer for the first time. That download includes three Intellivision games and an MS-DOS Intellivision emulator that plays original game code. It was followed by Volume 2, with another three games, including Deep Pockets Super Pro Pool & Billiards, a game completed in 1990 but not released until this download in 1997. In 2000, the Intellipack 3 download was made available with another four Intellivision games and emulators for Windows or Macintosh.", "title": "Legacy" }, { "paragraph_id": 57, "text": "Intellivision Productions released Intellivision Lives! and Intellivision Rocks on compact disc in 1998 and 2001. These compilation CDs play the original game code through emulators for MS-DOS, Windows, and Macintosh computers. Together they contain over 100 Intellivision games, including the never-before-released King of the Mountain, Takeover, Robot Rubble, League of Light, and others. Intellivision Rocks includes Intellivision games made by Activision and Imagic. Some games could not be included due to licensing; others simply used different titles to avoid trademarked names. The CDs are also a resource for development history, box art, hidden features, programmer biographies, video interviews, and original commercials.", "title": "Legacy" }, { "paragraph_id": 58, "text": "Also in 1997, Intellivision Productions announced they would sell development tools allowing customers to program their own Intellivision games. They were to provide documentation, PC-compatible cross-assemblers, and the Magus II PC Intellivision cartridge interface. The project was cancelled, but they did provide copies of \"Your Friend the EXEC\", the programmer's guide to the Intellivision Executive control software.
By 2000, Intellivision hobbyists had created their own development tools, including Intellivision memory cartridges.", "title": "Legacy" }, { "paragraph_id": 59, "text": "In 2005, Intellivision Productions announced that new Intellivision cartridges were to be produced. \"Deep Pockets and Illusions will be the first two releases in a series of new cartridges for the Intellivision. The printed circuit boards, the cartridge casings, the boxes are all being custom manufactured for this special series.\" Illusions was completed at Mattel Electronics' French office in 1983 but never released. Deep Pockets Super Pro Pool & Billiards was programmed for INTV Corporation in 1990 and only released as a ROM file in 1998. However, no cartridges were produced. Previously, in 2000, Intellivision Productions did release new cartridges for the Atari 2600 and ColecoVision. Sea Battle and Swordfight were Atari 2600 games created by Mattel Electronics in the early 1980s but not previously released. Steamroller (ColecoVision) was developed for Activision in 1984 and not previously released.", "title": "Legacy" }, { "paragraph_id": 60, "text": "In 1999, Activision released A Collection of Intellivision Classic Games for the PlayStation. Also known as Intellivision Classics, it has 30 emulated Intellivision games as well as video interviews with some of the original programmers. All of the games were licensed from Intellivision Productions, and none of the Activision or Imagic Intellivision games were included. In 2003, Crave Entertainment released a PlayStation 2 version of Intellivision Lives! and then Xbox and GameCube versions in 2004. In 2010, Virtual Play Games released Intellivision Lives! for the Nintendo DS, including one never-before-released game, Blow Out. In 2008 Microsoft made Intellivision Lives!
available for download on the Xbox Live Marketplace as an Xbox Original and playable on the Xbox 360.", "title": "Legacy" }, { "paragraph_id": 61, "text": "In 2003, the Intellivision 25 and Intellivision 15 direct-to-TV systems were released by Techno Source Ltd. These are all-in-one, single-controller designs that plug directly into a television; one includes 25 games, the other ten. These Intellivision games were not emulated but rewritten for the native processor (NES-based hardware) and adapted to a contemporary controller. As such, they look and play differently from the Intellivision. In 2005, they were updated for two-player play as the Intellivision X2 with 15 games. They were commercially very successful, selling about 4 million units altogether by the end of 2006.", "title": "Legacy" }, { "paragraph_id": 62, "text": "Several licensed Intellivision games became available for Windows computers through the GameTap subscription gaming service in 2005, including Astrosmash, Buzz Bombers, Hover Force, Night Stalker, Pinball, Shark! Shark!, Skiing and Snafu. Installation of the GameTap Player software was required to access the emulator and games. The VH1 Online Arcade made nine Intellivision games available in 2007; using a Shockwave emulator, these games could be played directly in a web browser with Shockwave Player. In 2010, VH1 Classic and MTV Networks released six Intellivision games for iOS. Intellivision games were first adapted to mobile phones and published by THQ Wireless in 2001. On March 24, 2010, Microsoft launched the Game Room service for Xbox Live and Games for Windows Live. This service includes support for Intellivision games and allows players to compete for high scores via online leaderboards. At the 2011 Consumer Electronics Show, Microsoft announced a version of Game Room for Windows Phone, promising a catalog of 44 Intellivision games.
AtGames and its Direct2Drive digital store have Windows-compatible Intellivision compilations available for download purchase.", "title": "Legacy" }, { "paragraph_id": 63, "text": "The number of Intellivision games that can be played effectively with contemporary game controllers is limited. On October 1, 2014, AtGames Digital Media, Inc., under license from Intellivision Productions, Inc., released the Intellivision Flashback classic game console. It is a miniature-sized Intellivision console with two original-sized Intellivision controllers. While adapters have been available to interface original Intellivision controllers to personal computers, the Intellivision Flashback includes two new Intellivision controllers identical in layout and function to the originals. It comes with 60 (61 at Dollar General) emulated Intellivision games built into ROM and a sample set of plastic overlays for 10 games. The Advanced Dungeons & Dragons games were included as Crown of Kings and Minotaur. As with many of the other Intellivision compilations, no games requiring third-party licensing were included.", "title": "Legacy" }, { "paragraph_id": 64, "text": "In May 2018, Tommy Tallarico announced that he had acquired the rights to the Intellivision brand and games, with plans to launch a new home video game console. A new company, Intellivision Entertainment, was formed with Tallarico serving as president. Intellivision Productions has been renamed Blue Sky Rangers Inc. and its video game intellectual property has been transferred to Intellivision Entertainment.", "title": "Legacy" } ]
The Intellivision is a home video game console released by Mattel Electronics in 1979. The name is a portmanteau of "intelligent television". Development began in 1977, the same year as the launch of its main competitor, the Atari 2600. In 1984, Mattel sold its video game assets to a former Mattel Electronics executive and investors; the business eventually became INTV Corporation. Game development ran from 1978 to 1990, when the Intellivision was discontinued. From 1980 to 1983, more than 3.75 million consoles were sold. According to Intellivision Entertainment, the final tally through 1990 is somewhere between 4.5 and 5 million consoles sold. In 2009, IGN ranked the Intellivision No. 14 on its list of the greatest video game consoles of all time. It remained Mattel's only video game console until the HyperScan in 2006.
2001-12-03T02:01:39Z
2023-11-11T19:37:57Z
[ "Template:Short description", "Template:Cite news", "Template:Second generation game consoles", "Template:About", "Template:Infobox information appliance", "Template:'", "Template:Cite journal", "Template:Cite magazine", "Template:Commons category", "Template:Mattel", "Template:US$", "Template:Val", "Template:Main", "Template:Clarify", "Template:See also", "Template:Webarchive", "Template:Dead link", "Template:Home video game consoles", "Template:Anchor", "Template:Reflist", "Template:Cite web", "Template:Cite book" ]
https://en.wikipedia.org/wiki/Intellivision
15,316
Imperialism
Imperialism is the practice, theory, or attitude of maintaining or extending power over foreign nations, particularly through expansionism, employing both hard power (military and economic power) and soft power (diplomatic power and cultural imperialism). Imperialism focuses on establishing or maintaining hegemony and a more or less formal empire. While related to the concept of colonialism, imperialism is a distinct concept that can apply to other forms of expansion and many forms of government. The word imperialism originated from the Latin word imperium, which means "supreme power", "sovereignty", or simply "rule". The word "imperialism" was originally coined in the 19th century to decry Napoleon's despotic militarism and became common in the current sense in Great Britain during the 1870s, when it was used with a negative connotation. By the end of the 19th century, it was being used to describe the behavior of empires at all times and places. Hannah Arendt and Joseph Schumpeter defined imperialism as expansion for the sake of expansion. Previously, the term had been used to describe what was perceived as Napoleon III's attempts at obtaining political support through foreign military interventions. The term was and is mainly applied to Western and Japanese political and economic dominance, especially in Asia and Africa, in the 19th and 20th centuries. Its precise meaning continues to be debated by scholars. Some writers, such as Edward Said, use the term more broadly to describe any system of domination and subordination organized around an imperial core and a periphery. This definition encompasses both nominal empires and neocolonialism. The term "imperialism" is often conflated with "colonialism"; however, many scholars have argued that each has its own distinct definition. Imperialism and colonialism have been used to describe one's influence upon a person or group of people. 
Robert Young writes that imperialism operates from the centre as a state policy and is developed for ideological as well as financial reasons, while colonialism is simply development for settlement or commercial purposes; colonialism, however, still includes invasion. Colonialism in modern usage also tends to imply a degree of geographic separation between the colony and the imperial power. In particular, Edward Said distinguishes between imperialism and colonialism by stating: "imperialism involved 'the practice, the theory and the attitudes of a dominating metropolitan center ruling a distant territory', while colonialism refers to the 'implanting of settlements on a distant territory.'" Contiguous land empires such as the Russian or Ottoman have traditionally been excluded from discussions of colonialism, though this is beginning to change, since it is accepted that they also sent populations into the territories they ruled. Imperialism and colonialism both denote political and economic advantage over a land and the indigenous populations it supports, yet scholars sometimes find it difficult to illustrate the difference between the two. Although both focus on the suppression of another, colonialism refers to the process of a country taking physical control of another, while imperialism refers to political and monetary dominance, exercised either formally or informally. Colonialism can be seen as the architect that decides how to dominate areas, while imperialism can be seen as supplying the idea behind the conquest, working in concert with colonialism. Colonialism occurs when the imperial nation begins a conquest of an area and eventually rules the territories the previous nation had controlled. Colonialism's core meaning is the exploitation of the valuable assets and supplies of the conquered nation, with the conquering nation then gaining the benefits of the spoils of war. 
The meaning of imperialism is to create an empire by conquering another state's lands and thereby increasing one's own dominance. Colonialism is the builder and preserver of the colonial possessions in an area by a population coming from a foreign region. Colonialism can completely change the existing social structure, physical structure, and economics of an area; it is not unusual that the characteristics of the conquering peoples are inherited by the conquered indigenous populations. Few colonies remain remote from their mother country. Thus, most will eventually establish a separate nationality or remain under complete control of their mother country. The Soviet leader Vladimir Lenin suggested that "imperialism was the highest form of capitalism", claiming that imperialism developed after colonialism and was distinguished from colonialism by monopoly capitalism. The Age of Imperialism, a time period beginning around 1760, saw industrializing European nations engaging in the process of colonizing, influencing, and annexing other parts of the world. 19th-century episodes included the "Scramble for Africa". In the 1970s, British historians John Gallagher (1919–1980) and Ronald Robinson (1920–1999) argued that European leaders rejected the notion that "imperialism" required formal, legal control by one government over a colonial region. Much more important was informal control of independent areas. According to Wm. Roger Louis, "In their view, historians have been mesmerized by formal empire and maps of the world with regions colored red. The bulk of British emigration, trade, and capital went to areas outside the formal British Empire. Key to their thinking is the idea of empire 'informally if possible and formally if necessary.'" Oron Hale says that Gallagher and Robinson looked at the British involvement in Africa, where they "found few capitalists, less capital, and not much pressure from the alleged traditional promoters of colonial expansion. 
Cabinet decisions to annex or not to annex were made, usually on the basis of political or geopolitical considerations." The main empires from 1875 to 1914 had a mixed record in terms of profitability. At first, planners expected that colonies would provide an excellent captive market for manufactured items. Apart from the Indian subcontinent, this was seldom true. By the 1890s, imperialists saw the economic benefit primarily in the production of inexpensive raw materials to feed the domestic manufacturing sector. Overall, Great Britain did very well in terms of profits from India, especially Mughal Bengal, but not from most of the rest of its empire. According to Indian economist Utsa Patnaik, the scale of the wealth transfer out of India between 1765 and 1938 was an estimated $45 trillion. The Netherlands did very well in the East Indies. Germany and Italy got very little trade or raw materials from their empires. France did slightly better. The Belgian Congo was notoriously profitable when it was a capitalistic rubber plantation owned and operated by King Leopold II as a private enterprise. However, scandal after scandal regarding atrocities in the Congo Free State led the international community to force the government of Belgium to take it over in 1908, and it became much less profitable. The Philippines cost the United States much more than expected because of military action against rebels. Because of the resources made available by imperialism, the world's economy grew significantly and became much more interconnected in the decades before World War I, making the many imperial powers rich and prosperous. Europe's expansion into territorial imperialism was largely focused on economic growth by collecting resources from colonies, in combination with assuming political control by military and political means. 
The colonization of India in the mid-18th century offers an example of this focus: there, the "British exploited the political weakness of the Mughal state, and, while military activity was important at various times, the economic and administrative incorporation of local elites was also of crucial significance" for the establishment of control over the subcontinent's resources, markets, and manpower. Although a substantial number of colonies had been designed to provide economic profit and to ship resources to home ports in the 17th and 18th centuries, D. K. Fieldhouse suggests that in the 19th and 20th centuries, in places such as Africa and Asia, this idea is not necessarily valid: "Modern empires were not artificially constructed economic machines. The second expansion of Europe was a complex historical process in which political, social and emotional forces in Europe and on the periphery were more influential than calculated imperialism. Individual colonies might serve an economic purpose; collectively no empire had any definable function, economic or otherwise. Empires represented only a particular phase in the ever-changing relationship of Europe with the rest of the world: analogies with industrial systems or investment in real estate were simply misleading." During this time, European merchants had the ability to "roam the high seas and appropriate surpluses from around the world (sometimes peaceably, sometimes violently) and to concentrate them in Europe". European expansion greatly accelerated in the 19th century. To obtain raw materials, Europe expanded imports from other countries and from the colonies. European industrialists sought raw materials such as dyes, cotton, vegetable oils, and metal ores from overseas. Concurrently, industrialization was quickly making Europe the centre of manufacturing and economic growth, driving resource needs. Communication became much more advanced during European expansion. 
With the invention of railroads and telegraphs, it became easier to communicate with other countries and to extend the administrative control of a home nation over its colonies. Steam railroads and steam-driven ocean shipping made possible the fast, cheap transport of massive amounts of goods to and from colonies. Along with advancements in communication, Europe also continued to advance in military technology. European chemists developed new explosives that made artillery much more deadly. By the 1880s, the machine gun had become a reliable battlefield weapon. This technology gave European armies an advantage over their opponents, as armies in less-developed countries were still fighting with arrows, swords, and leather shields (e.g. the Zulus in Southern Africa during the Anglo-Zulu War of 1879). Some exceptions were armies that managed to come nearly on par with European expeditions and standards, such as the Ethiopian armies at the Battle of Adwa and the Imperial Japanese Army, but these still relied heavily on weapons imported from Europe and often on European military advisors. Anglophone academic studies often base their theories regarding imperialism on the British experience of Empire. The term imperialism was originally introduced into English in its present sense in the late 1870s by opponents of the allegedly aggressive and ostentatious imperial policies of British Prime Minister Benjamin Disraeli. Supporters of "imperialism" such as Joseph Chamberlain quickly appropriated the concept. For some, imperialism designated a policy of idealism and philanthropy; others alleged that it was characterized by political self-interest, and a growing number associated it with capitalist greed. In Imperialism: A Study (1902), John A. Hobson developed a highly influential interpretation of imperialism that expanded on his belief that free enterprise capitalism had a negative impact on the majority of the population. 
In Imperialism he argued that the financing of overseas empires drained money that was needed at home. Money was invested abroad because lower wages paid to workers overseas made for higher profits and higher rates of return than domestic investment offered. So although domestic wages remained higher, they did not grow nearly as fast as they might have otherwise. Exporting capital, he concluded, put a lid on the growth of domestic wages and thus on the domestic standard of living. By the 1970s, historians such as David K. Fieldhouse and Oron Hale could argue that "the Hobsonian foundation has been almost completely demolished." The British experience failed to support it. However, European Marxists picked up Hobson's ideas and made them into their own theory of imperialism, most notably in Vladimir Lenin's Imperialism, the Highest Stage of Capitalism (1916). Lenin portrayed imperialism as the closure of the world market and the end of capitalist free competition, arising from the need for capitalist economies to constantly expand investment, material resources and manpower in a way that necessitated colonial expansion. Later Marxist theoreticians echoed this conception of imperialism as a structural feature of capitalism, which explained the First World War as the battle between imperialists for control of external markets. Lenin's treatise became a standard textbook that flourished until the collapse of communism in 1989–91. Some theoreticians on the non-Communist left have emphasized the structural or systemic character of "imperialism". Such writers have expanded the period associated with the term so that it now designates neither a policy, nor a short space of decades in the late 19th century, but a world system extending over a period of centuries, often going back to colonization and, in some accounts, to the Crusades. 
As the application of the term has expanded, its meaning has shifted along five distinct but often parallel axes: the moral, the economic, the systemic, the cultural, and the temporal. Those changes reflect—among other shifts in sensibility—a growing unease, even great distaste, with the pervasiveness of such power, specifically, Western power. Historians and political theorists have long debated the correlation between capitalism, class and imperialism. Much of the debate was pioneered by such theorists as J. A. Hobson (1858–1940), Joseph Schumpeter (1883–1950), Thorstein Veblen (1857–1929), and Norman Angell (1872–1967). While these non-Marxist writers were at their most prolific before World War I, they remained active in the interwar years. Their combined work informed the study of imperialism and its impact on Europe, as well as contributing to reflections on the rise of the military-political complex in the United States from the 1950s. Hobson argued that domestic social reforms could cure the international disease of imperialism by removing its economic foundation. Hobson theorized that state intervention through taxation could boost broader consumption, create wealth, and encourage a peaceful, tolerant, multipolar world order. Walter Rodney, in his 1972 How Europe Underdeveloped Africa, proposes the idea that imperialism is a phase of capitalism "in which Western European capitalist countries, the US, and Japan established political, economic, military and cultural hegemony over other parts of the world which were initially at a lower level and therefore could not resist domination." As a result, Imperialism "for many years embraced the whole world – one part being the exploiters and the other the exploited, one part being dominated and the other acting as overlords, one part making policy and the other being dependent." Imperialism has also been identified in newer phenomena like space development and its governing context. 
Imperial control, territorial and cultural, is justified through discourses about the imperialists' understanding of different spaces. Conceptually, imagined geographies explain the limitations of the imperialist understanding of the societies of the different spaces inhabited by the non–European Other. In Orientalism (1978), Edward Said said that the West developed the concept of The Orient—an imagined geography of the Eastern world—which functions as an essentializing discourse that represents neither the ethnic diversity nor the social reality of the Eastern world. By reducing the East to cultural essences, the imperial discourse uses place-based identities to create cultural difference and psychological distance between "We, the West" and "They, the East" and between "Here, in the West" and "There, in the East". That cultural differentiation was especially noticeable in the books and paintings of early Oriental studies, the European examinations of the Orient, which misrepresented the East as irrational and backward, the opposite of the rational and progressive West. Defining the East as a negative vision of the Western world, as its inferior, not only increased the sense-of-self of the West, but also was a way of ordering the East, and making it known to the West, so that it could be dominated and controlled. Therefore, Orientalism was the ideological justification of early Western imperialism—a body of knowledge and ideas that rationalized social, cultural, political, and economic control of other, non-white peoples. One of the main tools used by imperialists was cartography. Cartography is "the art, science and technology of making maps" but this definition is problematic. It implies that maps are objective representations of the world when in reality they serve very political means. For Harley, maps serve as an example of Foucault's power and knowledge concept. 
To better illustrate this idea, Bassett focuses his analysis on the role of 19th-century maps during the "Scramble for Africa". He states that maps "contributed to empire by promoting, assisting, and legitimizing the extension of French and British power into West Africa". During his analysis of 19th-century cartographic techniques, he highlights the use of blank space to denote unknown or unexplored territory. This provided incentives for imperial and colonial powers to obtain "information to fill in blank spaces on contemporary maps". Although cartographic processes advanced through imperialism, further analysis of their progress reveals many biases linked to eurocentrism. According to Bassett, "[n]ineteenth-century explorers commonly requested Africans to sketch maps of unknown areas on the ground. Many of those maps were highly regarded for their accuracy" but were not printed in Europe unless Europeans verified them. Imperialism in pre-modern times was common in the form of expansionism through vassalage and conquest. The concept of cultural imperialism refers to the cultural influence of one dominant culture over others, i.e. a form of soft power, which changes the moral, cultural, and societal worldview of the subordinate culture. This means more than just "foreign" music, television or film becoming popular with young people; rather that a populace changes its own expectations of life, desiring for their own country to become more like the foreign country depicted. For example, depictions of opulent American lifestyles in the soap opera Dallas during the Cold War changed the expectations of Romanians; a more recent example is the influence of smuggled South Korean drama-series in North Korea. The importance of soft power is not lost on authoritarian regimes, which may oppose such influence with bans on foreign popular culture, control of the internet and of unauthorized satellite dishes, etc. 
Nor is such a usage of culture recent – as part of Roman imperialism, local elites would be exposed to the benefits and luxuries of Roman culture and lifestyle, with the aim that they would then become willing participants. Imperialism has been subject to moral censure by its critics, and thus the term "imperialism" is frequently used in international propaganda as a pejorative for expansionist and aggressive foreign policy. An empire mentality may build on and bolster views contrasting "primitive" and "advanced" peoples and cultures, thus justifying and encouraging imperialist practices among participants. Associated psychological tropes include the White Man's Burden and the idea of civilizing mission (French: mission civilisatrice). The political concept social imperialism is a Marxist expression first used in the early 20th century by Lenin as "socialist in words, imperialist in deeds" describing the Fabian Society and other socialist organizations. Later, in a split with the Soviet Union, Mao Zedong criticized its leaders as social imperialists. Stephen Howe has summarized his view on the beneficial effects of the colonial empires: At least some of the great modern empires – the British, French, Austro-Hungarian, Russian, and even the Ottoman – have virtues that have been too readily forgotten. They provided stability, security, and legal order for their subjects. They constrained, and at their best, tried to transcend, the potentially savage ethnic or religious antagonisms among the peoples. And the aristocracies which ruled most of them were often far more liberal, humane, and cosmopolitan than their supposedly ever more democratic successors. A controversial aspect of imperialism is the defense and justification of empire-building based on seemingly rational grounds. In ancient China, Tianxia denoted the lands, space, and area divinely appointed to the Emperor by universal and well-defined principles of order. 
The center of this land was directly apportioned to the Imperial court, forming the center of a world view that centered on the Imperial court and went concentrically outward to major and minor officials and then the common citizens, tributary states, and finally ending with the fringe "barbarians". Tianxia's idea of hierarchy gave the Chinese a privileged position and was justified through the promise of order and peace. J. A. Hobson identifies this justification on general grounds as: "It is desirable that the earth should be peopled, governed, and developed, as far as possible, by the races which can do this work best, i.e. by the races of highest 'social efficiency'". Many others argued that imperialism is justified for several different reasons. Friedrich Ratzel believed that in order for a state to survive, imperialism was needed. Halford Mackinder felt that Great Britain needed to be one of the greatest imperialists and therefore justified imperialism. The purportedly scientific nature of "Social Darwinism" and a theory of races formed a supposedly rational justification for imperialism. Under this doctrine, the French politician Jules Ferry could declare in 1883 that "Superior races have a right, because they have a duty. They have the duty to civilize the inferior races." The rhetoric of colonizers being racially superior appears to have achieved its purpose, for example throughout Latin America "whiteness" is still prized today and various forms of blanqueamiento (whitening) are common. The Royal Geographical Society of London and other geographical societies in Europe had great influence and were able to fund travelers who would come back with tales of their discoveries. These societies also served as a space for travellers to share these stories. Political geographers such as Friedrich Ratzel of Germany and Halford Mackinder of Britain also supported imperialism. 
Ratzel believed expansion was necessary for a state's survival while Mackinder supported Britain's imperial expansion; these two arguments dominated the discipline for decades. Geographical theories such as environmental determinism also suggested that tropical environments created uncivilized people in need of European guidance. For instance, American geographer Ellen Churchill Semple argued that even though human beings originated in the tropics they were only able to become fully human in the temperate zone. Tropicality can be paralleled with Edward Said's Orientalism as the west's construction of the east as the "other". According to Said, orientalism allowed Europe to establish itself as the superior and the norm, which justified its dominance over the essentialized Orient. Technology and economic efficiency were often improved in territories subjected to imperialism through the building of roads, other infrastructure and introduction of new technologies. The principles of imperialism are often generalizable to the policies and practices of the British Empire "during the last generation, and proceeds rather by diagnosis than by historical description". British imperialism in some sparsely-inhabited regions appears to have applied a principle now termed terra nullius (a Latin expression stemming from Roman law meaning 'no man's land'). Australia serves as a case study: British settlement and colonial rule of the continent in the 18th century was arguably premised on terra nullius, as settlers considered it unused by its original inhabitants. The concept of environmental determinism served as a moral justification for the domination of certain territories and peoples. The environmental determinist school of thought held that the environment in which certain people lived determined those persons' behaviours; and thus validated their domination. 
For example, the Western world saw people living in tropical environments as "less civilized", therefore justifying colonial control as a civilizing mission. Across the three major waves of European colonialism (the first in the Americas, the second in Asia and the last in Africa), environmental determinism served to categorically place indigenous people in a racial hierarchy. This took two forms: orientalism and tropicality. Some geographic scholars under colonizing empires divided the world into climatic zones. These scholars believed that Northern Europe and the Mid-Atlantic temperate climate produced a hard-working, moral, and upstanding human being. In contrast, tropical climates allegedly yielded lazy attitudes, sexual promiscuity, exotic culture, and moral degeneracy. The people of these climates were believed to be in need of guidance and intervention from a European empire to aid in the governing of a more evolved social structure; they were seen as incapable of such a feat. Similarly, orientalism could promote a view of a people based on their geographical location. Anti-imperialism gained a wide currency after the Second World War and at the onset of the Cold War as political movements in colonies of European powers promoted national sovereignty. Some anti-imperialist groups who opposed the United States supported the power of the Soviet Union, such as in Guevarism, while in Maoism this was criticized as social imperialism. Pan-Africanism is a movement across Africa and the world that came as a result of imperial ideas splitting apart African nations and pitting them against each other. The Pan-African movement instead tried to reverse those ideas by uniting Africans and creating a sense of brotherhood among all African people. The Pan-African movement helped with the eventual end of Colonialism in Africa. Representatives at the 1900 Pan African Conference demanded moderate reforms for colonial African nations. 
The conference also discussed African populations in the Caribbean and the United States and their rights. A total of six Pan-African conferences were held, allowing African people a voice in ending colonial rule. The Roman Empire was the post-Republican period of ancient Rome. As a polity, it included large territorial holdings around the Mediterranean Sea in Europe, North Africa, and Western Asia, ruled by emperors. England's imperialist ambitions can be seen as early as the 16th century as the Tudor conquest of Ireland began in the 1530s. In 1599 the British East India Company was established and was chartered by Queen Elizabeth in the following year. With the establishment of trading posts in India, the British were able to maintain strength relative to other empires such as the Portuguese who already had set up trading posts in India. Between 1621 and 1699, the Kingdom of Scotland authorised several colonies in the Americas. Most of these colonies were either aborted or collapsed quickly for various reasons. Under the Acts of Union 1707, the English and Scottish kingdoms were merged, and their colonies collectively became subject to Great Britain (also known as the United Kingdom). The empire Great Britain would go on to found was the largest empire that the world has ever seen both in terms of landmass and population. Its power, both military and economic, remained unmatched for a few decades. In 1767, the Anglo-Mysore Wars and other political activity led to the East India Company's exploitation and plundering of the local economy, almost bringing the company to bankruptcy. By 1670 Britain's imperialist ambitions were well advanced, as she had colonies in Virginia, Massachusetts, Bermuda, Honduras, Antigua, Barbados, Jamaica and Nova Scotia. Due to the vast imperialist ambitions of European countries, Britain had several clashes with France. 
This competition was evident in the colonization of what is now known as Canada. John Cabot claimed Newfoundland for the British while the French established colonies along the St. Lawrence River and claimed it as "New France". Britain continued to expand by colonizing countries such as New Zealand and Australia, neither of which was empty land, as both had their own peoples and cultures. Britain's nationalistic movements were evident in the creation of the Commonwealth countries, which shared a sense of national identity. Following proto-industrialization, the "First" British Empire was based on mercantilism, and involved colonies and holdings primarily in North America, the Caribbean, and India. Its growth was reversed by the loss of the American colonies in 1776. Britain made compensating gains in India, Australia, and in constructing an informal economic empire through control of trade and finance in Latin America after the independence of Spanish and Portuguese colonies in about 1820. By the 1840s, Britain had adopted a highly successful policy of free trade that gave it dominance in the trade of much of the world. After losing its first Empire to the Americans, Britain then turned its attention towards Asia, Africa, and the Pacific. Following the defeat of Napoleonic France in 1815, Britain enjoyed a century of almost unchallenged dominance and expanded its imperial holdings around the globe. Unchallenged at sea, British dominance was later described as Pax Britannica ("British Peace"), a period of relative peace in Europe and the world (1815–1914) during which the British Empire became the global hegemon and adopted the role of global policeman. However, this peace was mostly a perceived one from Europe, and the period was still an almost uninterrupted series of colonial wars and disputes. 
The British conquest of India, the intervention against Mehemet Ali, the Anglo-Burmese Wars, the Crimean War, the Opium Wars and the Scramble for Africa, to name the most notable conflicts, mobilised ample military means to press Britain's lead in the global expansion Europe pursued across the century. In the early 19th century, the Industrial Revolution began to transform Britain; by the time of the Great Exhibition in 1851 the country was described as the "workshop of the world". The British Empire expanded to include India, large parts of Africa and many other territories throughout the world. Alongside the formal control it exerted over its own colonies, British dominance of much of world trade meant that it effectively controlled the economies of many regions, such as Asia and Latin America. Domestically, political attitudes favoured free trade and laissez-faire policies and a gradual widening of the voting franchise. During this century, the population increased at a dramatic rate, accompanied by rapid urbanisation, causing significant social and economic stresses. To seek new markets and sources of raw materials, the Conservative Party under Disraeli launched a period of imperialist expansion in Egypt, South Africa, and elsewhere. Canada, Australia, and New Zealand became self-governing dominions. A resurgence came in the late 19th century with the Scramble for Africa and major additions in Asia and the Middle East. The British spirit of imperialism was expressed by Joseph Chamberlain and Lord Rosebery, and implemented in Africa by Cecil Rhodes. The pseudo-sciences of Social Darwinism and theories of race formed an ideological underpinning and legitimation during this time. Other influential spokesmen included Lord Cromer, Lord Curzon, General Kitchener, Lord Milner, and the writer Rudyard Kipling. After the First Boer War, the South African Republic and Orange Free State were recognised by Britain but eventually re-annexed after the Second Boer War. 
But British power was fading, as the reunited German state founded by the Kingdom of Prussia posed a growing threat to Britain's dominance. As of 1913, Britain was the world's fourth economy, behind the U.S., Russia and Germany. The Irish War of Independence in 1919–1921 led to the creation of the Irish Free State. However, Britain gained control of former German and Ottoman colonies through the League of Nations mandate system. Britain now had a practically continuous line of controlled territories from Egypt to Burma and another one from Cairo to Cape Town. However, this period was also the one of the emergence of independence movements based on nationalism and new experiences the colonists had gained in the war. World War II decisively weakened Britain's position in the world, especially financially. Decolonization movements arose nearly everywhere in the Empire, resulting in Indian independence and partition in 1947, the breakaway of self-governing dominions from the empire in 1949, and the establishment of independent states in the 1950s. British imperialism showed its frailty in Egypt during the Suez Crisis in 1956. However, with the United States and Soviet Union emerging from World War II as the sole superpowers, Britain's role as a worldwide power declined significantly and rapidly. In Canada, the term "imperialism" (and the related term "colonialism") has had a variety of contradictory meanings since the 19th century. In the late 19th and early 20th centuries, to be an "imperialist" meant thinking of Canada as a part of the British nation, not a separate nation. The older words for the same concepts were "loyalism" or "unionism", which continued to be used as well. In mid-twentieth-century Canada, the words "imperialism" and "colonialism" were used in English Canadian discourse to instead portray Canada as a victim of economic and cultural penetration by the United States. 
In twentieth-century French-Canadian discourse, the "imperialists" were all the Anglo-Saxon countries, including Canada, that were oppressing French-speakers and the province of Quebec. By the early 21st century, "colonialism" was used to highlight supposed anti-indigenous attitudes and actions of Canada inherited from the British period. China was one of the world's oldest empires. Due to its long history of imperialist expansion, China has been seen by its neighboring countries as a threat due to its large population, giant economy, large military force as well as its territorial evolution throughout history. Starting with the unification of China under the Qin dynasty, later Chinese dynasties continued to follow its form of expansion. The most successful Chinese imperial dynasties in terms of territorial expansion were the Han, Tang, Yuan, and Qing dynasties. Denmark–Norway (Denmark after 1814) possessed overseas colonies from 1536 until 1953. At its apex there were colonies on four continents: Europe, North America, Africa and Asia. In the 17th century, following territorial losses on the Scandinavian Peninsula, Denmark-Norway began to develop colonies, forts, and trading posts in West Africa, the Caribbean, and the Indian subcontinent. Christian IV first initiated the policy of expanding Denmark-Norway's overseas trade, as part of the mercantilist wave that was sweeping Europe. Denmark-Norway's first colony was established at Tranquebar on India's southern coast in 1620. Admiral Ove Gjedde led the expedition that established the colony. After 1814, when Norway was ceded to Sweden, Denmark retained what remained of Norway's great medieval colonial holdings. One by one the smaller colonies were lost or sold. Tranquebar was sold to the British in 1845. The United States purchased the Danish West Indies in 1917. Iceland became independent in 1944. 
Today, the only remaining vestiges are two originally Norwegian colonies that are currently within the Danish Realm, the Faroe Islands and Greenland; the Faroes were a Danish county until 1948, while Greenland's colonial status ceased in 1953. They are now autonomous territories. During the 16th century, the French colonization of the Americas began with the creation of New France. It was followed by the French East India Company's trading posts in Africa and Asia in the 17th century. France had its "First colonial empire" from 1534 until 1814, including New France (Canada, Acadia, Newfoundland and Louisiana), French West Indies (Saint-Domingue, Guadeloupe, Martinique), French Guiana, Senegal (Gorée), Mascarene Islands (Mauritius Island, Réunion) and French India. Its "Second colonial empire" began with the seizure of Algiers in 1830 and came for the most part to an end with the granting of independence to Algeria in 1962. The French imperial history was marked by numerous wars, large and small, and also by significant help to France itself from the colonials in the world wars. France took control of Algeria in 1830 but began in earnest to rebuild its worldwide empire after 1850, concentrating chiefly in North and West Africa (French North Africa, French West Africa, French Equatorial Africa), as well as South-East Asia (French Indochina), with other conquests in the South Pacific (New Caledonia, French Polynesia). France also twice attempted to make Mexico a colony in 1838–39 and in 1861–67 (see Pastry War and Second French intervention in Mexico). French Republicans, at first hostile to empire, only became supportive when Germany started to build her own colonial empire. As it developed, the new empire took on roles of trade with France, supplying raw materials and purchasing manufactured items, as well as lending prestige to the motherland and spreading French civilization and language as well as Catholicism. It also provided crucial manpower in both World Wars. 
It became a moral justification to lift the world up to French standards by bringing Christianity and French culture. In 1884 the leading exponent of colonialism, Jules Ferry, declared France had a civilising mission: "The higher races have a right over the lower races, they have a duty to civilize the inferior". Full citizenship rights – assimilation – were offered, although in reality assimilation was always on the distant horizon. In contrast to Britain, France sent small numbers of settlers to its colonies, with the only notable exception of Algeria, where French settlers nevertheless always remained a small minority. The French colonial empire extended over 11,500,000 km² (4,400,000 sq mi) at its height in the 1920s and had a population of 110 million people on the eve of World War II. In World War II, Charles de Gaulle and the Free French used the overseas colonies as bases from which they fought to liberate France. However, after 1945 anti-colonial movements began to challenge the Empire. France fought and lost a bitter war in Vietnam in the 1950s. Although France won the war in Algeria, de Gaulle decided to grant Algeria independence anyway in 1962. French settlers and many local supporters relocated to France. Nearly all of France's colonies gained independence by 1960, but France retained great financial and diplomatic influence. It has repeatedly sent troops to assist its former colonies in Africa in suppressing insurrections and coups d'état. French colonial officials, influenced by the revolutionary ideal of equality, standardized schools, curricula, and teaching methods as much as possible. They did not establish colonial school systems with the idea of furthering the ambitions of the local people, but rather simply exported the systems and methods in vogue in the mother nation. Having a moderately trained lower bureaucracy was of great use to colonial officials. 
The emerging French-educated indigenous elite saw little value in educating rural peoples. After 1946 the policy was to bring the best students to Paris for advanced training. The result was to immerse the next generation of leaders in the growing anti-colonial diaspora centered in Paris. Impressionistic colonials could mingle with studious scholars, radical revolutionaries, and everything in between. Ho Chi Minh and other young radicals in Paris formed the French Communist party in 1920. Tunisia was exceptional. The colony was administered by Paul Cambon, who built an educational system for colonists and indigenous people alike that was closely modeled on mainland France. He emphasized female and vocational education. By independence, the quality of Tunisian education nearly equalled that in France. African nationalists rejected such a public education system, which they perceived as an attempt to retard African development and maintain colonial superiority. One of the first demands of the emerging nationalist movement after World War II was the introduction of full metropolitan-style education in French West Africa with its promise of equality with Europeans. In Algeria, the debate was polarized. The French set up schools based on the scientific method and French culture. The Pied-Noir (Catholic migrants from Europe) welcomed this. Those goals were rejected by the Muslim Arabs, who prized mental agility and their distinctive religious tradition. The Arabs refused to become patriotic and cultured Frenchmen and a unified educational system was impossible until the Pied-Noir and their Arab allies went into exile after 1962. In South Vietnam from 1955 to 1975 there were two competing powers in education, as the French continued their work and the Americans moved in. They sharply disagreed on goals. 
The French educators sought to preserve French culture among the Vietnamese elites and relied on the Mission Culturelle – the heir of the colonial Direction of Education – and its prestigious high schools. The Americans looked at the great mass of people and sought to make South Vietnam a nation strong enough to stop communism. The Americans had far more money, as USAID coordinated and funded the activities of expert teams, and particularly of academic missions. The French deeply resented the American invasion of their historical zone of cultural imperialism. German expansion into Slavic lands began in the 12th–13th centuries (see Drang nach Osten). The concept of Drang nach Osten was a core element of German nationalism and a major element of Nazi ideology. However, the German involvement in the seizure of overseas territories was negligible until the end of the 19th century. Prussia unified the other states into the second German Empire in 1871. Its Chancellor, Otto von Bismarck (1862–90), long opposed colonial acquisitions, arguing that the burden of obtaining, maintaining, and defending such possessions would outweigh any potential benefits. He felt that colonies did not pay for themselves, that the German bureaucratic system would not work well in the tropics and the diplomatic disputes over colonies would distract Germany from its central interest, Europe itself. However, public opinion and elite opinion in Germany demanded colonies for reasons of international prestige, so Bismarck was forced to oblige. In 1883–84 Germany began to build a colonial empire in Africa and the South Pacific. The establishment of the German colonial empire started with German New Guinea in 1884. Within 25 years, German South West Africa had committed the Herero and Namaqua genocide in modern-day Namibia, the first genocide of the 20th century. 
German colonies included, in Africa, the present territories of Tanzania, Rwanda, Burundi, Namibia, Cameroon, Ghana and Togo; in Oceania, New Guinea, the Solomon Islands, Nauru, the Marshall Islands, the Mariana Islands, the Caroline Islands and Samoa; and in Asia, Qingdao, Yantai and Jiaozhou Bay. The Treaty of Versailles made them mandates temporarily operated by the Allied victors. Germany also lost part of its eastern territories, which became part of independent Poland as a result of the Treaty of Versailles in 1919. Finally, the eastern territories captured in the Middle Ages were torn from Germany and became part of Poland and the USSR as a result of the territorial reorganization established by the Potsdam Conference of the great powers in 1945.

The Italian Empire (Impero italiano) comprised the overseas possessions of the Kingdom of Italy, primarily in northeast Africa. It began with the purchase in 1869 of Assab Bay on the Red Sea by an Italian navigation company which intended to establish a coaling station at the time the Suez Canal was being opened to navigation. This was taken over by the Italian government in 1882, becoming modern Italy's first overseas territory. By the start of the First World War in 1914, Italy had acquired in Africa the colony of Eritrea on the Red Sea coast, a large protectorate and later colony in Somalia, and authority in formerly Ottoman Tripolitania and Cyrenaica (gained after the Italo-Turkish War), which were later unified in the colony of Libya. Outside Africa, Italy possessed the Dodecanese Islands off the coast of Turkey (following the Italo-Turkish War) and a small concession in Tianjin in China following the Boxer Rebellion of 1900. During the First World War, Italy occupied southern Albania to prevent it from falling to Austria-Hungary. In 1917, it established a protectorate over Albania, which remained in place until 1920.
The Fascist government that came to power with Benito Mussolini in 1922 sought to increase the size of the Italian empire and to satisfy the claims of Italian irredentists. Italy's second invasion of Ethiopia, in 1935–36, was successful, and it merged its new conquest with its older east African colonies to create Italian East Africa. In 1939, Italy invaded Albania and incorporated it into the Fascist state. During the Second World War (1939–1945), Italy occupied British Somaliland, parts of south-eastern France, western Egypt and most of Greece, but then lost those conquests and its African colonies, including Ethiopia, to the invading Allied forces by 1943. It was forced in the peace treaty of 1947 to relinquish sovereignty over all its colonies. It was granted a trust to administer former Italian Somaliland under United Nations supervision in 1950. When Somalia became independent in 1960, Italy's eight-decade experiment with colonialism ended.

For over 200 years, Japan maintained a feudal society during a period of relative isolation from the rest of the world. However, in the 1850s, military pressure from the United States and other world powers coerced Japan into opening itself to the global market, ending the country's isolation. A period of conflicts and political revolutions followed due to socioeconomic uncertainty, ending in 1868 with the reunification of political power under the Japanese Emperor during the Meiji Restoration. This sparked a period of rapid industrialization driven in part by a Japanese desire for self-sufficiency. By the early 1900s, Japan was a naval power that could hold its own against an established European power, as it demonstrated by defeating Russia. Despite its rising population and increasingly industrialized economy, Japan lacked significant natural resources.
As a result, the country turned to imperialism and expansionism in part as a means of compensating for these shortcomings, adopting the national motto "Fukoku kyōhei" (富国強兵, "Enrich the state, strengthen the military"). Japan was eager to take every opportunity. In 1869 it took advantage of the defeat of the rebels of the Republic of Ezo to definitively incorporate the island of Hokkaido into Japan. For centuries, Japan had viewed the Ryukyu Islands as one of its provinces. In 1871 the Mudan incident took place: Taiwanese aborigines murdered 54 shipwrecked Ryūkyūan sailors. At that time the Ryukyu Islands were claimed by both Qing China and Japan, and the Japanese interpreted the incident as an attack on their citizens. They took steps to bring the islands into their jurisdiction: in 1872 the Japanese Ryukyu Domain was declared, and in 1874 a retaliatory expedition to Taiwan was sent, which was a success. The success of this expedition emboldened the Japanese: not even the Americans had been able to defeat the Taiwanese, in the Formosa Expedition of 1867. Very few gave it much thought at the time, but this was the first in a series of Japanese expansionist moves. Japan occupied Taiwan for the rest of 1874 and then left owing to Chinese pressure, but in 1879 it finally annexed the Ryukyu Islands. In 1875 Qing China sent a 300-man force to subdue the Taiwanese, but unlike the Japanese the Chinese were ambushed and routed, with 250 of their men killed; the failure of this expedition exposed once more the inability of Qing China to exert effective control in Taiwan, and acted as another incentive for the Japanese to annex the island. Eventually, the spoils of winning the First Sino-Japanese War in 1894 included Taiwan. In 1875 Japan undertook its first operation against Joseon Korea, another territory it had coveted for centuries; the Ganghwa Island incident opened Korea to international trade. Korea was annexed in 1910.
As a result of winning the Russo-Japanese War in 1905, Japan took part of Sakhalin Island from Russia. The victory against the Russian Empire shook the world: never before had an Asian nation defeated a European power, and in Japan it was seen as a great feat. Japan's victory against Russia would act as a precedent for Asian countries in the fight against the Western powers for decolonization. During World War I, Japan took German-leased territories in China's Shandong Province, as well as the Mariana, Caroline, and Marshall Islands, and kept the islands as League of Nations mandates. At first, Japan was in good standing with the victorious Allied powers of World War I, but disagreements and dissatisfaction with the rewards of the treaties cooled relations with them; for example, American pressure forced it to return the Shandong area. By the 1930s, economic depression, the urgent need for resources and a growing distrust of the Allied powers made Japan lean toward a hardened militaristic stance. Through the decade, it would grow closer to Germany and Italy, forming with them the Axis alliance. In 1931 Japan took Manchuria from China. International reaction condemned this move, but Japan's already strong skepticism toward the Allied nations meant that it carried on nevertheless. During the Second Sino-Japanese War in 1937, Japan's military invaded central China. In 1938–1939 Japan also made an attempt to seize Soviet and Mongolian territory, but suffered serious defeats (see Battle of Lake Khasan, Battles of Khalkhin Gol). By now, relations with the Allied powers were at their lowest point, and an international boycott was enforced against Japan to deprive it of natural resources. A military move to gain access to them was deemed necessary, and so Japan attacked Pearl Harbor, bringing the United States into World War II.
Using its superior technological advances in naval aviation and its modern doctrines of amphibious and naval warfare, Japan achieved one of the fastest maritime expansions in history. By 1942 Japan had conquered much of East Asia and the Pacific, including eastern China, Hong Kong, Thailand, Vietnam, Cambodia, Burma (Myanmar), Malaysia, the Philippines, Indonesia, part of New Guinea and many islands of the Pacific Ocean. Just as Japan's late-industrialization success and victory against the Russian Empire had been seen as an example among underdeveloped Asia-Pacific nations, the Japanese took advantage of this and promoted among their conquered peoples the goal of jointly creating an anti-European "Greater East Asia Co-Prosperity Sphere". This plan helped the Japanese gain support from native populations during their conquests, especially in Indonesia. However, the United States had a vastly stronger military and industrial base and defeated Japan, stripping it of its conquests and returning its settlers to Japan.

The most notable example of Dutch imperialism is Indonesia.

The Ottoman Empire was an imperial state that lasted from 1299 to 1922. In 1453, Mehmed the Conqueror captured Constantinople and made it his capital. During the 16th and 17th centuries, in particular at the height of its power under the reign of Suleiman the Magnificent, the Ottoman Empire was a powerful multinational, multilingual empire, which invaded and colonized much of Southeast Europe, Western Asia, the Caucasus, North Africa, and the Horn of Africa. Its repeated invasions and brutal treatment of Slavs led to the Great Migrations of the Serbs to escape persecution. At the beginning of the 17th century the empire contained 32 provinces and numerous vassal states. Some of these were later absorbed into the empire, while others were granted various types of autonomy over the course of centuries.
Following a long period of military setbacks against European powers, the Ottoman Empire gradually declined, losing control of much of its territory in Europe and Africa. By 1810 Egypt was effectively independent. In 1821–1829 the Greeks, in the Greek War of Independence, were assisted by Russia, Britain and France. From 1815 to 1914 the Ottoman Empire could exist only amid the acute rivalry of the great powers, with Britain its main supporter, especially in the Crimean War of 1853–1856 against Russia. After the Ottoman defeat in the Russo-Turkish War (1877–1878), Bulgaria, Serbia and Montenegro gained independence and Britain took colonial control of Cyprus, while Bosnia and Herzegovina was occupied and then annexed by the Austro-Hungarian Empire in 1908. The empire allied with Germany in World War I with the imperial ambition of recovering its lost territories, but it dissolved in the aftermath of its decisive defeat. The Kemalist national movement, supported by Soviet Russia, achieved victory in the Turkish War of Independence, and the parties signed and ratified the Treaty of Lausanne in 1923 and 1924. The Republic of Turkey was established.

By the 18th century, the Russian Empire had extended its control to the Pacific, peacefully forming a common border with the Qing Empire and the Empire of Japan. This expansion took place through a large number of military invasions of the lands east, west, and south of it. The Polish–Russian War of 1792 took place after Polish nobility from the Polish–Lithuanian Commonwealth wrote the Constitution of 3 May 1791. The war resulted in eastern Poland being conquered by Imperial Russia, which ruled it as a colony until 1918. The southern campaigns involved a series of Russo-Persian Wars, which began with the Persian Expedition of 1796 and resulted in the acquisition of Georgia as a protectorate. Between 1800 and 1864, Imperial armies invaded south in the Russian conquest of the Caucasus, the Murid War, and the Russo-Circassian War.
This last conflict led to the ethnic cleansing of Circassians from their lands. The Russian conquest of Siberia over the Khanate of Sibir took place in the 16th and 17th centuries, and resulted in the slaughter of various indigenous tribes by Russians, including the Daur, the Koryaks, the Itelmens, the Mansi and the Chukchi. The Russian colonization of Central and Eastern Europe and Siberia, and the treatment of the resident indigenous peoples, has been compared to the European colonization of the Americas, with similar negative impacts on the indigenous Siberians as upon the indigenous peoples of the Americas. The extermination of indigenous Siberian tribes was so complete that a relatively small population of only 180,000 are said to exist today. The Russian Empire exploited and suppressed the Cossack hosts during this period, before turning them into a special military estate (soslovie) in the late 18th century. Cossacks were then used in Imperial Russian campaigns against other tribes. The acquisition of Ukraine by Russia commenced in 1654 with the Pereiaslav Agreement. Georgia's accession to Russia in 1783 was marked by the Treaty of Georgievsk.

Bolshevik leaders had effectively reestablished a polity with roughly the same extent as that empire by 1921, though with an internationalist ideology: Lenin in particular asserted the right of national minorities within the new territory to limited self-determination. Beginning in 1923, the policy of "Indigenization" (korenizatsiya) was intended to help non-Russians develop their national cultures within a socialist framework. Never formally revoked, it stopped being implemented after 1932. After World War II, the Soviet Union installed, in the areas of Eastern Europe its forces occupied, socialist regimes modeled on those it had installed in 1919–20 in the old Russian Empire.
The Soviet Union and later the People's Republic of China supported revolutionary and communist movements in foreign nations and colonies to advance their own interests, but were not always successful. The USSR provided great assistance to the Kuomintang in 1926–1928 in the formation of a unified Chinese government (see Northern Expedition). Although relations with the USSR later deteriorated, it was the only world power that provided military assistance to China against Japanese aggression in 1937–1941 (see Sino-Soviet Non-Aggression Pact). The victory of the Chinese Communists in the civil war of 1946–1949 relied on substantial help from the USSR (see Chinese Civil War).

Trotsky and others believed that the revolution could only succeed in Russia as part of a world revolution. Lenin wrote extensively on the matter and famously declared that imperialism was the highest stage of capitalism. However, after Lenin's death, Joseph Stalin established 'socialism in one country' for the Soviet Union, creating the model for subsequent inward-looking Stalinist states and purging the early internationalist elements. The internationalist tendencies of the early revolution would be abandoned until they returned in the framework of a client state in competition with the Americans during the Cold War. In the post-Stalin period of the late 1950s, the new political leader Nikita Khrushchev put pressure on Soviet-American relations by starting a new wave of anti-imperialist propaganda. In his speech at the UN in 1960, he announced the continuation of the war on imperialism, stating that soon the people of different countries would come together and overthrow their imperialist leaders. Although the Soviet Union declared itself anti-imperialist, critics argue that it exhibited traits common to historic empires. Some scholars hold that the Soviet Union was a hybrid entity containing elements common to both multinational empires and nation-states.
Some also argued that the USSR practiced colonialism as other imperial powers did, carrying on the old Russian tradition of expansion and control. Mao Zedong once argued that the Soviet Union had itself become an imperialist power while maintaining a socialist façade. Moreover, imperialist ideas were carried into practice at the highest levels of government. Josip Broz Tito and Milovan Djilas referred to the Stalinist USSR's foreign policies, such as the occupation and economic exploitation of Eastern Europe and its aggressive and hostile policy towards Yugoslavia, as Soviet imperialism. Some Marxists within the Russian Empire and later the USSR, like Sultan Galiev and Vasyl Shakhrai, considered the Soviet regime a renewed version of Russian imperialism and colonialism. The crushing of the Hungarian Revolution of 1956 and the Soviet–Afghan War have been cited as examples.

Made up of former colonies itself, the early United States expressed its opposition to imperialism, at least in a form distinct from its own Manifest Destiny, through policies such as the Monroe Doctrine. However, the US had unsuccessfully attempted to capture Canada in the War of 1812, and it achieved very significant territorial concessions from Mexico during the Mexican–American War. Beginning in the late 19th and early 20th century, policies such as Theodore Roosevelt's interventionism in Central America and Woodrow Wilson's mission to "make the world safe for democracy" changed all this. They were often backed by military force, but were more often effected from behind the scenes. This is consistent with the general notion of hegemony and imperium of historical empires. In 1898, Americans who opposed imperialism created the Anti-Imperialist League to oppose the US annexation of the Philippines and Cuba.
One year later, a war erupted in the Philippines, causing business, labor and government leaders in the US to condemn America's occupation of the Philippines and to denounce it for causing the deaths of many Filipinos. American foreign policy was denounced as a "racket" by Smedley Butler, a former American general who had become a spokesman for the far left. At the start of World War II, President Franklin D. Roosevelt was opposed to European colonialism, especially in India. He pulled back when Britain's Winston Churchill demanded that victory in the war be the first priority. Roosevelt expected that the United Nations would take up the problem of decolonization.

Some have described the internal strife between various peoples within the United States as a form of imperialism or colonialism. This internal form is distinct from informal U.S. imperialism in the form of political and financial hegemony, and it also differed from the United States' formation of "colonies" abroad. Through the treatment of its indigenous peoples during westward expansion, the United States took on the form of an imperial power prior to any attempts at external imperialism. This internal form of empire has been referred to as "internal colonialism". Participation in the African slave trade and the subsequent treatment of the 12 to 15 million Africans it involved is viewed by some as a more modern extension of America's "internal colonialism". However, this internal colonialism faced resistance, as external colonialism did, though the anti-colonial presence was far less prominent due to the nearly complete dominance that the United States was able to assert over both indigenous peoples and African-Americans. In a lecture on April 16, 2003, Edward Said described modern imperialism in the United States as an aggressive means of attack towards the contemporary Orient, stating that "due to their backward living, lack of democracy and the violation of women's rights.
The western world forgets during this process of converting the other that enlightenment and democracy are concepts that not all will agree upon".

Spanish imperialism in the colonial era corresponds with the rise and decline of the Spanish Empire, conventionally recognized as emerging in 1402 with the conquest of the Canary Islands. Following the successes of exploratory maritime voyages conducted during the Age of Discovery, Spain committed considerable financial and military resources towards developing a robust navy capable of conducting large-scale, transatlantic expeditionary operations in order to establish and solidify a firm imperial presence across large portions of North America, South America, and the geographic regions comprising the Caribbean basin. Concomitant with Spanish endorsement and sponsorship of transatlantic expeditionary voyages was the deployment of conquistadors, who further expanded Spanish imperial boundaries through the acquisition and development of territories and colonies. In congruence with the colonialist activities of competing European imperial powers throughout the 15th–19th centuries, the Spanish were equally engrossed in extending geopolitical power. The Caribbean basin functioned as a key geographic focal point for advancing Spanish imperialism. Similar to the strategic prioritization Spain placed on achieving victory in the conquests of the Aztec Empire and Inca Empire, Spain placed equal strategic emphasis on expanding its imperial footprint within the Caribbean basin. Echoing the prevailing ideological perspectives on colonialism and imperialism embraced by Spain's European rivals during the colonial era, including the English, French, and the Dutch, the Spanish used colonialism as a means of expanding imperial geopolitical borders and securing the defense of maritime trade routes in the Caribbean basin.
While leveraging colonialism in the same geographic operating theater as its imperial rivals, Spain maintained distinct imperial objectives and instituted a unique form of colonialism in support of its imperial agenda. Spain placed significant strategic emphasis on the acquisition, extraction, and exportation of precious metals (primarily gold and silver). A second objective was the evangelization of subjugated indigenous populations residing in mineral-rich and strategically favorable locations. Notable examples of these indigenous groups include the Taíno populations inhabiting Puerto Rico and parts of Cuba. Compulsory labor and slavery were widely institutionalized across Spanish-occupied territories and colonies, with an initial emphasis on directing labor towards mining and related methods of procuring precious metals. The emergence of the encomienda system during the 16th–17th centuries in occupied colonies within the Caribbean basin reflects a gradual shift in imperial prioritization, increasingly focusing on large-scale production and exportation of agricultural commodities.

The scope and scale of Spanish participation in imperialism within the Caribbean basin remains a subject of scholarly debate among historians. A fundamental source of contention stems from the inadvertent conflation of theoretical conceptions of imperialism and colonialism. Furthermore, significant variation exists in the definition and interpretation of these terms as expounded by historians, anthropologists, philosophers, and political scientists. Among historians, there is substantial support for approaching imperialism as a conceptual theory emerging during the 18th–19th centuries, particularly within Britain, propagated by key exponents such as Joseph Chamberlain and Benjamin Disraeli. In accordance with this theoretical perspective, the activities of the Spanish in the Caribbean are not components of a preeminent, ideologically driven form of imperialism.
Rather, these activities are more accurately classified as representing a form of colonialism. Further divergence among historians can be attributed to the varying theoretical perspectives on imperialism proposed by emerging academic schools of thought. Noteworthy examples include cultural imperialism, whereby proponents such as John Downing and Annabelle Sreberny-Mohammadi define imperialism as "...the conquest and control of one country by a more powerful one. Cultural imperialism signifies the dimensions of the process that go beyond economic exploitation or military force." Moreover, colonialism is understood as "...the form of imperialism in which the government of the colony is run directly by foreigners." In spite of diverging perspectives and the absence of a unified scholarly consensus on imperialism among historians, within the context of Spanish expansion in the Caribbean basin during the colonial era, imperialism can be interpreted as an overarching ideological agenda perpetuated through the institution of colonialism. In this context, colonialism functions as an instrument designed to achieve specific imperialist objectives.
[ { "paragraph_id": 0, "text": "Imperialism is the practice, theory or attitude of maintaining or extending power over foreign nations, particularly through expansionism, employing both hard power (military and economic power) and soft power (diplomatic power and cultural imperialism). Imperialism focuses on establishing or maintaining hegemony and a more or less formal empire. While related to the concepts of colonialism, imperialism is a distinct concept that can apply to other forms of expansion and many forms of government.", "title": "" }, { "paragraph_id": 1, "text": "The word imperialism originated from the Latin word imperium, which means supreme power, \"sovereignty\", or simply \"rule\". The word “imperialism” was originally coined in the 19th century to decry Napoleon’s despotic militarism and became common in the current sense in Great Britain during the 1870s, when it was used with a negative connotation. By the end of the 19th century it was being used to describe the behavior of empires at all times and places. Hannah Arendt and Joseph Schumpeter defined imperialism as expansion for the sake of expansion.", "title": "Etymology and usage" }, { "paragraph_id": 2, "text": "Previously, the term had been used to describe what was perceived as Napoleon III's attempts at obtaining political support through foreign military interventions. The term was and is mainly applied to Western and Japanese political and economic dominance, especially in Asia and Africa, in the 19th and 20th centuries. Its precise meaning continues to be debated by scholars. Some writers, such as Edward Said, use the term more broadly to describe any system of domination and subordination organized around an imperial core and a periphery. 
This definition encompasses both nominal empires and neocolonialism.", "title": "Etymology and usage" }, { "paragraph_id": 3, "text": "The term \"imperialism\" is often conflated with \"colonialism\"; however, many scholars have argued that each has its own distinct definition. Imperialism and colonialism have been used in order to describe one's influence upon a person or group of people. Robert Young writes that imperialism operates from the centre as a state policy and is developed for ideological as well as financial reasons, while colonialism is simply the development for settlement or commercial intentions; however, colonialism still includes invasion. Colonialism in modern usage also tends to imply a degree of geographic separation between the colony and the imperial power. Particularly, Edward Said distinguishes between imperialism and colonialism by stating: \"imperialism involved 'the practice, the theory and the attitudes of a dominating metropolitan center ruling a distant territory', while colonialism refers to the 'implanting of settlements on a distant territory.' Contiguous land empires such as the Russian or Ottoman have traditionally been excluded from discussions of colonialism, though this is beginning to change, since it is accepted that they also sent populations into the territories they ruled.", "title": "Versus colonialism" }, { "paragraph_id": 4, "text": "Imperialism and colonialism both dictate the political and economic advantage over a land and the indigenous populations they control, yet scholars sometimes find it difficult to illustrate the difference between the two. Although imperialism and colonialism focus on the suppression of another, if colonialism refers to the process of a country taking physical control of another, imperialism refers to the political and monetary dominance, either formally or informally. 
Colonialism is seen to be the architect deciding how to start dominating areas and then imperialism can be seen as creating the idea behind conquest cooperating with colonialism. Colonialism is when the imperial nation begins a conquest over an area and then eventually is able to rule over the areas the previous nation had controlled. Colonialism's core meaning is the exploitation of the valuable assets and supplies of the nation that was conquered and the conquering nation then gaining the benefits from the spoils of the war. The meaning of imperialism is to create an empire, by conquering the other state's lands and therefore increasing its own dominance. Colonialism is the builder and preserver of the colonial possessions in an area by a population coming from a foreign region. Colonialism can completely change the existing social structure, physical structure, and economics of an area; it is not unusual that the characteristics of the conquering peoples are inherited by the conquered indigenous populations. Few colonies remain remote from their mother country. Thus, most will eventually establish a separate nationality or remain under complete control of their mother colony.", "title": "Versus colonialism" }, { "paragraph_id": 5, "text": "The Soviet leader Vladimir Lenin suggested that \"imperialism was the highest form of capitalism, claiming that imperialism developed after colonialism, and was distinguished from colonialism by monopoly capitalism\".", "title": "Versus colonialism" }, { "paragraph_id": 6, "text": "The Age of Imperialism, a time period beginning around 1760, saw European industrializing nations, engaging in the process of colonizing, influencing, and annexing other parts of the world. 
19th century episodes included the \"Scramble for Africa.\"", "title": "Age of Imperialism" }, { "paragraph_id": 7, "text": "In the 1970s British historians John Gallagher (1919–1980) and Ronald Robinson (1920–1999) argued that European leaders rejected the notion that \"imperialism\" required formal, legal control by one government over a colonial region. Much more important was informal control of independent areas. According to Wm. Roger Louis, \"In their view, historians have been mesmerized by formal empire and maps of the world with regions colored red. The bulk of British emigration, trade, and capital went to areas outside the formal British Empire. Key to their thinking is the idea of empire 'informally if possible and formally if necessary.'\" Oron Hale says that Gallagher and Robinson looked at the British involvement in Africa where they \"found few capitalists, less capital, and not much pressure from the alleged traditional promoters of colonial expansion. Cabinet decisions to annex or not to annex were made, usually on the basis of political or geopolitical considerations.\"", "title": "Age of Imperialism" }, { "paragraph_id": 8, "text": "Looking at the main empires from 1875 to 1914, there was a mixed record in terms of profitability. At first, planners expected that colonies would provide an excellent captive market for manufactured items. Apart from the Indian subcontinent, this was seldom true. By the 1890s, imperialists saw the economic benefit primarily in the production of inexpensive raw materials to feed the domestic manufacturing sector. Overall, Great Britain did very well in terms of profits from India, especially Mughal Bengal, but not from most of the rest of its empire. According to Indian Economist Utsa Patnaik, the scale of the wealth transfer out of India, between 1765 and 1938, was an estimated $45 Trillion. The Netherlands did very well in the East Indies. 
Germany and Italy got very little trade or raw materials from their empires. France did slightly better. The Belgian Congo was notoriously profitable when it was a capitalistic rubber plantation owned and operated by King Leopold II as a private enterprise. However, scandal after scandal regarding atrocities in the Congo Free State led the international community to force the government of Belgium to take it over in 1908, and it became much less profitable. The Philippines cost the United States much more than expected because of military action against rebels.", "title": "Age of Imperialism" }, { "paragraph_id": 9, "text": "Because of the resources made available by imperialism, the world's economy grew significantly and became much more interconnected in the decades before World War I, making the many imperial powers rich and prosperous.", "title": "Age of Imperialism" }, { "paragraph_id": 10, "text": "Europe's expansion into territorial imperialism was largely focused on economic growth by collecting resources from colonies, in combination with assuming political control by military and political means. The colonization of India in the mid-18th century offers an example of this focus: there, the \"British exploited the political weakness of the Mughal state, and, while military activity was important at various times, the economic and administrative incorporation of local elites was also of crucial significance\" for the establishment of control over the subcontinent's resources, markets, and manpower. Although a substantial number of colonies had been designed to provide economic profit and to ship resources to home ports in the 17th and 18th centuries, D. K. Fieldhouse suggests that in the 19th and 20th centuries in places such as Africa and Asia, this idea is not necessarily valid:", "title": "Age of Imperialism" }, { "paragraph_id": 11, "text": "Modern empires were not artificially constructed economic machines. 
The second expansion of Europe was a complex historical process in which political, social and emotional forces in Europe and on the periphery were more influential than calculated imperialism. Individual colonies might serve an economic purpose; collectively no empire had any definable function, economic or otherwise. Empires represented only a particular phase in the ever-changing relationship of Europe with the rest of the world: analogies with industrial systems or investment in real estate were simply misleading.", "title": "Age of Imperialism" }, { "paragraph_id": 12, "text": "During this time, European merchants had the ability to \"roam the high seas and appropriate surpluses from around the world (sometimes peaceably, sometimes violently) and to concentrate them in Europe\".", "title": "Age of Imperialism" }, { "paragraph_id": 13, "text": "European expansion greatly accelerated in the 19th century. To obtain raw materials, Europe expanded imports from other countries and from the colonies. European industrialists sought raw materials such as dyes, cotton, vegetable oils, and metal ores from overseas. Concurrently, industrialization was quickly making Europe the centre of manufacturing and economic growth, driving resource needs.", "title": "Age of Imperialism" }, { "paragraph_id": 14, "text": "Communication became much more advanced during European expansion. With the invention of railroads and telegraphs, it became easier to communicate with other countries and to extend the administrative control of a home nation over its colonies. Steam railroads and steam-driven ocean shipping made possible the fast, cheap transport of massive amounts of goods to and from colonies.", "title": "Age of Imperialism" }, { "paragraph_id": 15, "text": "Along with advancements in communication, Europe also continued to advance in military technology. European chemists made new explosives that made artillery much more deadly. 
By the 1880s, the machine gun had become a reliable battlefield weapon. This technology gave European armies an advantage over their opponents, as armies in less-developed countries were still fighting with arrows, swords, and leather shields (e.g. the Zulus in Southern Africa during the Anglo-Zulu War of 1879). Some exceptions were armies that managed to come nearly on par with European expeditions and standards, such as the Ethiopian armies at the Battle of Adwa and the Imperial Japanese Army, but these still relied heavily on weapons imported from Europe and often on European military advisors.", "title": "Age of Imperialism" }, { "paragraph_id": 16, "text": "Anglophone academic studies often base their theories regarding imperialism on the British experience of Empire. The term imperialism was originally introduced into English in its present sense in the late 1870s by opponents of the allegedly aggressive and ostentatious imperial policies of British Prime Minister Benjamin Disraeli. Supporters of \"imperialism\" such as Joseph Chamberlain quickly appropriated the concept. For some, imperialism designated a policy of idealism and philanthropy; others alleged that it was characterized by political self-interest, and a growing number associated it with capitalist greed.", "title": "Theories of imperialism" }, { "paragraph_id": 17, "text": "In Imperialism: A Study (1902), John A. Hobson developed a highly influential interpretation of imperialism that expanded on his belief that free enterprise capitalism had a negative impact on the majority of the population. In Imperialism he argued that the financing of overseas empires drained money that was needed at home. It was invested abroad because lower wages paid to workers overseas made for higher profits and higher rates of return, compared to domestic wages. So although domestic wages remained higher, they did not grow nearly as fast as they might have otherwise. 
Exporting capital, he concluded, put a lid on the growth of domestic wages and of the domestic standard of living. By the 1970s, historians such as David K. Fieldhouse and Oron Hale could argue that \"the Hobsonian foundation has been almost completely demolished.\" The British experience failed to support it. However, European Marxists picked up Hobson's ideas and made them into their own theory of imperialism, most notably in Vladimir Lenin's Imperialism, the Highest Stage of Capitalism (1916). Lenin portrayed imperialism as the closure of the world market and the end of capitalist free competition that arose from the need for capitalist economies to constantly expand investment, material resources and manpower in such a way that necessitated colonial expansion. Later Marxist theoreticians echo this conception of imperialism as a structural feature of capitalism, which explained the World War as the battle between imperialists for control of external markets. Lenin's treatise became a standard textbook that flourished until the collapse of communism in 1989–91.", "title": "Theories of imperialism" }, { "paragraph_id": 18, "text": "Some theoreticians on the non-Communist left have emphasized the structural or systemic character of \"imperialism\". Such writers have expanded the period associated with the term so that it now designates neither a policy, nor a short space of decades in the late 19th century, but a world system extending over a period of centuries, often going back to Colonization and, in some accounts, to the Crusades. As the application of the term has expanded, its meaning has shifted along five distinct but often parallel axes: the moral, the economic, the systemic, the cultural, and the temporal. 
Those changes reflect—among other shifts in sensibility—a growing unease, even great distaste, with the pervasiveness of such power, specifically, Western power.", "title": "Theories of imperialism" }, { "paragraph_id": 19, "text": "Historians and political theorists have long debated the correlation between capitalism, class and imperialism. Much of the debate was pioneered by such theorists as J. A. Hobson (1858–1940), Joseph Schumpeter (1883–1950), Thorstein Veblen (1857–1929), and Norman Angell (1872–1967). While these non-Marxist writers were at their most prolific before World War I, they remained active in the interwar years. Their combined work informed the study of imperialism and its impact on Europe, as well as contributing to reflections on the rise of the military-political complex in the United States from the 1950s. Hobson argued that domestic social reforms could cure the international disease of imperialism by removing its economic foundation. Hobson theorized that state intervention through taxation could boost broader consumption, create wealth, and encourage a peaceful, tolerant, multipolar world order.", "title": "Theories of imperialism" }, { "paragraph_id": 20, "text": "Walter Rodney, in his 1972 How Europe Underdeveloped Africa, proposes the idea that imperialism is a phase of capitalism \"in which Western European capitalist countries, the US, and Japan established political, economic, military and cultural hegemony over other parts of the world which were initially at a lower level and therefore could not resist domination.\" As a result, Imperialism \"for many years embraced the whole world – one part being the exploiters and the other the exploited, one part being dominated and the other acting as overlords, one part making policy and the other being dependent.\"", "title": "Theories of imperialism" }, { "paragraph_id": 21, "text": "Imperialism has also been identified in newer phenomena like space development and its governing 
context.", "title": "Theories of imperialism" }, { "paragraph_id": 22, "text": "Imperial control, territorial and cultural, is justified through discourses about the imperialists' understanding of different spaces. Conceptually, imagined geographies explain the limitations of the imperialist understanding of the societies of the different spaces inhabited by the non–European Other.", "title": "Issues" }, { "paragraph_id": 23, "text": "In Orientalism (1978), Edward Said said that the West developed the concept of The Orient—an imagined geography of the Eastern world—which functions as an essentializing discourse that represents neither the ethnic diversity nor the social reality of the Eastern world. That by reducing the East into cultural essences, the imperial discourse uses place-based identities to create cultural difference and psychologic distance between \"We, the West\" and \"They, the East\" and between \"Here, in the West\" and \"There, in the East\".", "title": "Issues" }, { "paragraph_id": 24, "text": "That cultural differentiation was especially noticeable in the books and paintings of early Oriental studies, the European examinations of the Orient, which misrepresented the East as irrational and backward, the opposite of the rational and progressive West. Defining the East as a negative vision of the Western world, as its inferior, not only increased the sense-of-self of the West, but also was a way of ordering the East, and making it known to the West, so that it could be dominated and controlled. Therefore, Orientalism was the ideological justification of early Western imperialism—a body of knowledge and ideas that rationalized social, cultural, political, and economic control of other, non-white peoples.", "title": "Issues" }, { "paragraph_id": 25, "text": "One of the main tools used by imperialists was cartography. Cartography is \"the art, science and technology of making maps\" but this definition is problematic. 
It implies that maps are objective representations of the world when in reality they often serve political ends. For Harley, maps serve as an example of Foucault's concept of power and knowledge.", "title": "Issues" }, { "paragraph_id": 26, "text": "To better illustrate this idea, Bassett focuses his analysis on the role of 19th-century maps during the \"Scramble for Africa\". He states that maps \"contributed to empire by promoting, assisting, and legitimizing the extension of French and British power into West Africa\". During his analysis of 19th-century cartographic techniques, he highlights the use of blank space to denote unknown or unexplored territory. This provided incentives for imperial and colonial powers to obtain \"information to fill in blank spaces on contemporary maps\".", "title": "Issues" }, { "paragraph_id": 27, "text": "Although cartographic processes advanced through imperialism, further analysis of their progress reveals many biases linked to eurocentrism. According to Bassett, \"[n]ineteenth-century explorers commonly requested Africans to sketch maps of unknown areas on the ground. Many of those maps were highly regarded for their accuracy\" but were not printed in Europe unless Europeans verified them.", "title": "Issues" }, { "paragraph_id": 28, "text": "Imperialism in pre-modern times was common in the form of expansionism through vassalage and conquest.", "title": "Issues" }, { "paragraph_id": 29, "text": "The concept of cultural imperialism refers to the cultural influence of one dominant culture over others, i.e. a form of soft power, which changes the moral, cultural, and societal worldview of the subordinate culture. This means more than just \"foreign\" music, television or film becoming popular with young people; rather that a populace changes its own expectations of life, desiring for their own country to become more like the foreign country depicted. 
For example, depictions of opulent American lifestyles in the soap opera Dallas during the Cold War changed the expectations of Romanians; a more recent example is the influence of smuggled South Korean drama-series in North Korea. The importance of soft power is not lost on authoritarian regimes, which may oppose such influence with bans on foreign popular culture, control of the internet and of unauthorized satellite dishes, etc. Nor is such a usage of culture recent – as part of Roman imperialism, local elites would be exposed to the benefits and luxuries of Roman culture and lifestyle, with the aim that they would then become willing participants.", "title": "Issues" }, { "paragraph_id": 30, "text": "Imperialism has been subject to moral censure by its critics, and thus the term \"imperialism\" is frequently used in international propaganda as a pejorative for expansionist and aggressive foreign policy.", "title": "Issues" }, { "paragraph_id": 31, "text": "An empire mentality may build on and bolster views contrasting \"primitive\" and \"advanced\" peoples and cultures, thus justifying and encouraging imperialist practices among participants. Associated psychological tropes include the White Man's Burden and the idea of civilizing mission (French: mission civilisatrice).", "title": "Issues" }, { "paragraph_id": 32, "text": "The political concept social imperialism is a Marxist expression first used in the early 20th century by Lenin as \"socialist in words, imperialist in deeds\" describing the Fabian Society and other socialist organizations. 
Later, in a split with the Soviet Union, Mao Zedong criticized its leaders as social imperialists.", "title": "Issues" }, { "paragraph_id": 33, "text": "Stephen Howe has summarized his view on the beneficial effects of the colonial empires:", "title": "Justification" }, { "paragraph_id": 34, "text": "At least some of the great modern empires – the British, French, Austro-Hungarian, Russian, and even the Ottoman – have virtues that have been too readily forgotten. They provided stability, security, and legal order for their subjects. They constrained, and at their best, tried to transcend, the potentially savage ethnic or religious antagonisms among the peoples. And the aristocracies which ruled most of them were often far more liberal, humane, and cosmopolitan than their supposedly ever more democratic successors.", "title": "Justification" }, { "paragraph_id": 35, "text": "A controversial aspect of imperialism is the defense and justification of empire-building based on seemingly rational grounds. In ancient China, Tianxia denoted the lands, space, and area divinely appointed to the Emperor by universal and well-defined principles of order. The center of this land was directly apportioned to the Imperial court, forming the center of a world view that centered on the Imperial court and went concentrically outward to major and minor officials and then the common citizens, tributary states, and finally ending with the fringe \"barbarians\". Tianxia's idea of hierarchy gave Chinese a privileged position and was justified through the promise of order and peace. J. A. Hobson identifies this justification on general grounds as: \"It is desirable that the earth should be peopled, governed, and developed, as far as possible, by the races which can do this work best, i.e. by the races of highest 'social efficiency'\". Many others argued that imperialism is justified for several different reasons. 
Friedrich Ratzel believed that in order for a state to survive, imperialism was needed. Halford Mackinder felt that Great Britain needed to be one of the greatest imperialists and therefore justified imperialism. The purportedly scientific nature of \"Social Darwinism\" and a theory of races formed a supposedly rational justification for imperialism. Under this doctrine, the French politician Jules Ferry could declare in 1883 that \"Superior races have a right, because they have a duty. They have the duty to civilize the inferior races.\" The rhetoric of colonizers being racially superior appears to have achieved its purpose, for example throughout Latin America \"whiteness\" is still prized today and various forms of blanqueamiento (whitening) are common.", "title": "Justification" }, { "paragraph_id": 36, "text": "The Royal Geographical Society of London and other geographical societies in Europe had great influence and were able to fund travelers who would come back with tales of their discoveries. These societies also served as a space for travellers to share these stories. Political geographers such as Friedrich Ratzel of Germany and Halford Mackinder of Britain also supported imperialism. Ratzel believed expansion was necessary for a state's survival while Mackinder supported Britain's imperial expansion; these two arguments dominated the discipline for decades.", "title": "Justification" }, { "paragraph_id": 37, "text": "Geographical theories such as environmental determinism also suggested that tropical environments created uncivilized people in need of European guidance. For instance, American geographer Ellen Churchill Semple argued that even though human beings originated in the tropics they were only able to become fully human in the temperate zone. Tropicality can be paralleled with Edward Said's Orientalism as the west's construction of the east as the \"other\". 
According to Said, orientalism allowed Europe to establish itself as the superior and the norm, which justified its dominance over the essentialized Orient.", "title": "Justification" }, { "paragraph_id": 38, "text": "Technology and economic efficiency were often improved in territories subjected to imperialism through the building of roads, other infrastructure and introduction of new technologies.", "title": "Justification" }, { "paragraph_id": 39, "text": "The principles of imperialism are often generalizable to the policies and practices of the British Empire \"during the last generation, and proceeds rather by diagnosis than by historical description\". British imperialism in some sparsely-inhabited regions appears to have applied a principle now termed Terra nullius (Latin expression which stems from Roman law meaning 'no man's land'). The country of Australia serves as a case study in relation to British settlement and colonial rule of the continent in the 18th century, that was arguably premised on terra nullius, as its settlers considered it unused by its original inhabitants.", "title": "Justification" }, { "paragraph_id": 40, "text": "The concept of environmental determinism served as a moral justification for the domination of certain territories and peoples. The environmental determinist school of thought held that the environment in which certain people lived determined those persons' behaviours; and thus validated their domination. For example, the Western world saw people living in tropical environments as \"less civilized\", therefore justifying colonial control as a civilizing mission. Across the three major waves of European colonialism (the first in the Americas, the second in Asia and the last in Africa), environmental determinism served to place categorically indigenous people in a racial hierarchy. 
This takes two forms, orientalism and tropicality.", "title": "Justification" }, { "paragraph_id": 41, "text": "Some geographic scholars under colonizing empires divided the world into climatic zones. These scholars believed that Northern Europe and the Mid-Atlantic temperate climate produced a hard-working, moral, and upstanding human being. In contrast, tropical climates allegedly yielded lazy attitudes, sexual promiscuity, exotic culture, and moral degeneracy. The people of these climates were believed to be in need of guidance and intervention from a European empire to aid in the governing of a more evolved social structure; they were seen as incapable of such a feat. Similarly, orientalism could promote a view of a people based on their geographical location.", "title": "Justification" }, { "paragraph_id": 42, "text": "Anti-imperialism gained a wide currency after the Second World War and at the onset of the Cold War as political movements in colonies of European powers promoted national sovereignty. Some anti-imperialist groups who opposed the United States supported the power of the Soviet Union, such as in Guevarism, while in Maoism this was criticized as social imperialism.", "title": "Anti-imperialism" }, { "paragraph_id": 43, "text": "Pan-Africanism is a movement across Africa and the world that came as a result of imperial ideas splitting apart African nations and pitting them against each other. The Pan-African movement instead tried to reverse those ideas by uniting Africans and creating a sense of brotherhood among all African people. The Pan-African movement helped with the eventual end of Colonialism in Africa.", "title": "Anti-imperialism" }, { "paragraph_id": 44, "text": "Representatives at the 1900 Pan African Conference demanded moderate reforms for colonial African nations. The conference also discussed African populations in the Caribbean and the United States and their rights. 
A total of six Pan-African conferences were held, and these allowed the African people to have a voice in ending colonial rule.", "title": "Anti-imperialism" }, { "paragraph_id": 45, "text": "The Roman Empire was the post-Republican period of ancient Rome. As a polity, it included large territorial holdings around the Mediterranean Sea in Europe, North Africa, and Western Asia, ruled by emperors.", "title": "Imperialism by country" }, { "paragraph_id": 46, "text": "England's imperialist ambitions can be seen as early as the 16th century as the Tudor conquest of Ireland began in the 1530s. In 1599 the British East India Company was established and was chartered by Queen Elizabeth in the following year. With the establishment of trading posts in India, the British were able to maintain strength relative to other empires such as the Portuguese who already had set up trading posts in India.", "title": "Imperialism by country" }, { "paragraph_id": 47, "text": "Between 1621 and 1699, the Kingdom of Scotland authorised several colonies in the Americas. Most of these colonies were either aborted or collapsed quickly for various reasons.", "title": "Imperialism by country" }, { "paragraph_id": 48, "text": "Under the Acts of Union 1707, the English and Scottish kingdoms were merged, and their colonies collectively became subject to Great Britain (also known as the United Kingdom). The empire Great Britain would go on to found was the largest empire that the world has ever seen both in terms of landmass and population. Its power, both military and economic, remained unmatched for a few decades.", "title": "Imperialism by country" }, { "paragraph_id": 49, "text": "In 1767, the Anglo-Mysore Wars and other political activity drove the East India Company's exploitation and plundering of the local economy, almost bringing the company to bankruptcy. 
By 1670 Britain's imperialist ambitions were well underway, with colonies in Virginia, Massachusetts, Bermuda, Honduras, Antigua, Barbados, Jamaica and Nova Scotia. Due to the vast imperialist ambitions of European countries, Britain had several clashes with France. This competition was evident in the colonization of what is now known as Canada. John Cabot claimed Newfoundland for the British while the French established colonies along the St. Lawrence River, claiming it as \"New France\". Britain continued to expand by colonizing countries such as New Zealand and Australia, neither of which was empty land, as both had their own peoples and cultures. Britain's nationalistic movements were evident in the creation of the Commonwealth countries, which shared a sense of national identity.", "title": "Imperialism by country" }, { "paragraph_id": 50, "text": "Following the proto-industrialization, the \"First\" British Empire was based on mercantilism, and involved colonies and holdings primarily in North America, the Caribbean, and India. Its growth was reversed by the loss of the American colonies in 1776. Britain made compensating gains in India, Australia, and in constructing an informal economic empire through control of trade and finance in Latin America after the independence of Spanish and Portuguese colonies in about 1820. By the 1840s, Britain had adopted a highly successful policy of free trade that gave it dominance in the trade of much of the world. After losing its first Empire to the Americans, Britain then turned its attention towards Asia, Africa, and the Pacific. Following the defeat of Napoleonic France in 1815, Britain enjoyed a century of almost unchallenged dominance and expanded its imperial holdings around the globe. 
Unchallenged at sea, British dominance was later described as Pax Britannica (\"British Peace\"), a period of relative peace in Europe and the world (1815–1914) during which the British Empire became the global hegemon and adopted the role of global policeman. However, this peace was mostly a perceived one from Europe, and the period was still an almost uninterrupted series of colonial wars and disputes. The British Conquest of India, its intervention against Mehemet Ali, the Anglo-Burmese Wars, the Crimean War, the Opium Wars and the Scramble for Africa, to name the most notable conflicts, mobilised ample military means to press Britain's lead in the global conquest that Europe led across the century.", "title": "Imperialism by country" }, { "paragraph_id": 51, "text": "In the early 19th century, the Industrial Revolution began to transform Britain; by the time of the Great Exhibition in 1851 the country was described as the \"workshop of the world\". The British Empire expanded to include India, large parts of Africa and many other territories throughout the world. Alongside the formal control it exerted over its own colonies, British dominance of much of world trade meant that it effectively controlled the economies of many regions, such as Asia and Latin America. Domestically, political attitudes favoured free trade and laissez-faire policies and a gradual widening of the voting franchise. During this century, the population increased at a dramatic rate, accompanied by rapid urbanisation, causing significant social and economic stresses. To seek new markets and sources of raw materials, the Conservative Party under Disraeli launched a period of imperialist expansion in Egypt, South Africa, and elsewhere. Canada, Australia, and New Zealand became self-governing dominions.", "title": "Imperialism by country" }, { "paragraph_id": 52, "text": "A resurgence came in the late 19th century with the Scramble for Africa and major additions in Asia and the Middle East. 
The British spirit of imperialism was expressed by Joseph Chamberlain and Lord Rosebery, and implemented in Africa by Cecil Rhodes. The pseudo-sciences of Social Darwinism and theories of race formed an ideological underpinning and legitimation during this time. Other influential spokesmen included Lord Cromer, Lord Curzon, General Kitchener, Lord Milner, and the writer Rudyard Kipling. After the First Boer War, the South African Republic and Orange Free State were recognised by Britain but eventually re-annexed after the Second Boer War. But British power was fading, as the reunited German state founded by the Kingdom of Prussia posed a growing threat to Britain's dominance. As of 1913, Britain was the world's fourth-largest economy, behind the U.S., Russia and Germany.", "title": "Imperialism by country" }, { "paragraph_id": 53, "text": "The Irish War of Independence of 1919–1921 led to the creation of the Irish Free State. But Britain gained control of former German and Ottoman colonies with the League of Nations mandate. Britain now had a practically continuous line of controlled territories from Egypt to Burma and another one from Cairo to Cape Town. However, this period was also the one of the emergence of independence movements based on nationalism and new experiences the colonists had gained in the war.", "title": "Imperialism by country" }, { "paragraph_id": 54, "text": "World War II decisively weakened Britain's position in the world, especially financially. Decolonization movements arose nearly everywhere in the Empire, resulting in Indian independence and partition in 1947, the self-governing dominions' break from the empire in 1949, and the establishment of independent states in the 1950s. British imperialism showed its frailty in Egypt during the Suez Crisis in 1956. 
However, with the United States and Soviet Union emerging from World War II as the sole superpowers, Britain's role as a worldwide power declined significantly and rapidly.", "title": "Imperialism by country" }, { "paragraph_id": 55, "text": "In Canada, the term \"imperialism\" (and the related term \"colonialism\") has had a variety of contradictory meanings since the 19th century. In the late 19th and early 20th centuries, to be an \"imperialist\" meant thinking of Canada as a part of the British nation, not a separate nation. The older words for the same concepts were \"loyalism\" or \"unionism\", which continued to be used as well. In mid-twentieth-century Canada, the words \"imperialism\" and \"colonialism\" were used in English Canadian discourse to instead portray Canada as a victim of economic and cultural penetration by the United States. In twentieth-century French-Canadian discourse, the \"imperialists\" were all the Anglo-Saxon countries, including Canada, that were oppressing French-speakers and the province of Quebec. By the early 21st century, \"colonialism\" was used to highlight supposed anti-indigenous attitudes and actions of Canada inherited from the British period.", "title": "Imperialism by country" }, { "paragraph_id": 56, "text": "China was one of the world's oldest empires. Due to its long history of imperialist expansion, China has been seen by its neighboring countries as a threat because of its large population, giant economy and large military force, as well as its territorial evolution throughout history. 
Starting with the unification of China under the Qin dynasty, later Chinese dynasties continued to follow this pattern of expansion.", "title": "Imperialism by country" }, { "paragraph_id": 57, "text": "The most successful Chinese imperial dynasties in terms of territorial expansion were the Han, Tang, Yuan, and Qing dynasties.", "title": "Imperialism by country" }, { "paragraph_id": 58, "text": "Denmark–Norway (Denmark after 1814) possessed overseas colonies from 1536 until 1953. At its apex there were colonies on four continents: Europe, North America, Africa and Asia. In the 17th century, following territorial losses on the Scandinavian Peninsula, Denmark-Norway began to develop colonies, forts, and trading posts in West Africa, the Caribbean, and the Indian subcontinent. Christian IV first initiated the policy of expanding Denmark-Norway's overseas trade, as part of the mercantilist wave that was sweeping Europe. Denmark-Norway's first colony was established at Tranquebar on India's southern coast in 1620. Admiral Ove Gjedde led the expedition that established the colony. After 1814, when Norway was ceded to Sweden, Denmark retained what remained of Norway's great medieval colonial holdings. One by one the smaller colonies were lost or sold. Tranquebar was sold to the British in 1845. The United States purchased the Danish West Indies in 1917. Iceland became independent in 1944. Today, the only remaining vestiges are two originally Norwegian colonies that are currently within the Danish Realm, the Faroe Islands and Greenland; the Faroes were a Danish county until 1948, while Greenland's colonial status ceased in 1953. They are now autonomous territories.", "title": "Imperialism by country" }, { "paragraph_id": 59, "text": "During the 16th century, the French colonization of the Americas began with the creation of New France. It was followed by the French East India Company's trading posts in Africa and Asia in the 17th century. 
France had its \"First colonial empire\" from 1534 until 1814, including New France (Canada, Acadia, Newfoundland and Louisiana), French West Indies (Saint-Domingue, Guadeloupe, Martinique), French Guiana, Senegal (Gorée), Mascarene Islands (Mauritius Island, Réunion) and French India.", "title": "Imperialism by country" }, { "paragraph_id": 60, "text": "Its \"Second colonial empire\" began with the seizure of Algiers in 1830 and came for the most part to an end with the granting of independence to Algeria in 1962. The French imperial history was marked by numerous wars, large and small, and also by significant help to France itself from the colonials in the world wars. France took control of Algeria in 1830 but began in earnest to rebuild its worldwide empire after 1850, concentrating chiefly in North and West Africa (French North Africa, French West Africa, French Equatorial Africa), as well as South-East Asia (French Indochina), with other conquests in the South Pacific (New Caledonia, French Polynesia). France also twice attempted to make Mexico a colony in 1838–39 and in 1861–67 (see Pastry War and Second French intervention in Mexico).", "title": "Imperialism by country" }, { "paragraph_id": 61, "text": "French Republicans, at first hostile to empire, only became supportive when Germany started to build her own colonial empire. As it developed, the new empire took on roles of trade with France, supplying raw materials and purchasing manufactured items, as well as lending prestige to the motherland and spreading French civilization and language as well as Catholicism. It also provided crucial manpower in both World Wars. It became a moral justification to lift the world up to French standards by bringing Christianity and French culture. In 1884 the leading exponent of colonialism, Jules Ferry declared France had a civilising mission: \"The higher races have a right over the lower races, they have a duty to civilize the inferior\". 
Full citizenship rights – assimilation – were offered, although in reality assimilation was always on the distant horizon. In contrast to Britain, France sent small numbers of settlers to its colonies, with the only notable exception of Algeria, where French settlers nevertheless always remained a small minority.", "title": "Imperialism by country" }, { "paragraph_id": 62, "text": "The French colonial empire extended over 11,500,000 km² (4,400,000 sq mi) at its height in the 1920s and had a population of 110 million people on the eve of World War II.", "title": "Imperialism by country" }, { "paragraph_id": 63, "text": "In World War II, Charles de Gaulle and the Free French used the overseas colonies as bases from which they fought to liberate France. However, after 1945 anti-colonial movements began to challenge the Empire. France fought and lost a bitter war in Vietnam in the 1950s. Although France won the war in Algeria, de Gaulle decided to grant Algeria independence anyway in 1962. French settlers and many local supporters relocated to France. Nearly all of France's colonies gained independence by 1960, but France retained great financial and diplomatic influence. It has repeatedly sent troops to assist its former colonies in Africa in suppressing insurrections and coups d'état.", "title": "Imperialism by country" }, { "paragraph_id": 64, "text": "French colonial officials, influenced by the revolutionary ideal of equality, standardized schools, curricula, and teaching methods as much as possible. They did not establish colonial school systems with the idea of furthering the ambitions of the local people, but rather simply exported the systems and methods in vogue in the mother nation. Having a moderately trained lower bureaucracy was of great use to colonial officials. The emerging French-educated indigenous elite saw little value in educating rural peoples. After 1946 the policy was to bring the best students to Paris for advanced training.
The result was to immerse the next generation of leaders in the growing anti-colonial diaspora centered in Paris. Impressionable colonials could mingle with studious scholars, radical revolutionaries, and everything in between. Ho Chi Minh and other young radicals in Paris formed the French Communist Party in 1920.", "title": "Imperialism by country" }, { "paragraph_id": 65, "text": "Tunisia was exceptional. The colony was administered by Paul Cambon, who built an educational system for colonists and indigenous people alike that was closely modeled on mainland France. He emphasized female and vocational education. By independence, the quality of Tunisian education nearly equalled that in France.", "title": "Imperialism by country" }, { "paragraph_id": 66, "text": "African nationalists rejected such a public education system, which they perceived as an attempt to retard African development and maintain colonial superiority. One of the first demands of the emerging nationalist movement after World War II was the introduction of full metropolitan-style education in French West Africa with its promise of equality with Europeans.", "title": "Imperialism by country" }, { "paragraph_id": 67, "text": "In Algeria, the debate was polarized. The French set up schools based on the scientific method and French culture. The Pied-Noir (Catholic migrants from Europe) welcomed this. Those goals were rejected by the Muslim Arabs, who prized mental agility and their distinctive religious tradition. The Arabs refused to become patriotic and cultured Frenchmen, and a unified educational system was impossible until the Pied-Noir and their Arab allies went into exile after 1962.", "title": "Imperialism by country" }, { "paragraph_id": 68, "text": "In South Vietnam from 1955 to 1975 there were two competing powers in education, as the French continued their work and the Americans moved in. They sharply disagreed on goals.
The French educators sought to preserve French culture among the Vietnamese elites and relied on the Mission Culturelle – the heir of the colonial Direction of Education – and its prestigious high schools. The Americans looked at the great mass of people and sought to make South Vietnam a nation strong enough to stop communism. The Americans had far more money, as USAID coordinated and funded the activities of expert teams, and particularly of academic missions. The French deeply resented the American invasion of their historical zone of cultural imperialism.", "title": "Imperialism by country" }, { "paragraph_id": 69, "text": "German expansion into Slavic lands began in the 12th–13th centuries (see Drang nach Osten). The concept of Drang nach Osten was a core element of German nationalism and a major element of Nazi ideology. However, German involvement in the seizure of overseas territories was negligible until the end of the 19th century. Prussia unified the other states into the second German Empire in 1871. Its Chancellor, Otto von Bismarck (1862–90), long opposed colonial acquisitions, arguing that the burden of obtaining, maintaining, and defending such possessions would outweigh any potential benefits. He felt that colonies did not pay for themselves, that the German bureaucratic system would not work well in the tropics, and that diplomatic disputes over colonies would distract Germany from its central interest, Europe itself.", "title": "Imperialism by country" }, { "paragraph_id": 70, "text": "However, public opinion and elite opinion in Germany demanded colonies for reasons of international prestige, so Bismarck was forced to oblige. In 1883–84 Germany began to build a colonial empire in Africa and the South Pacific. The establishment of the German colonial empire started with German New Guinea in 1884.
Within 25 years, German forces in South West Africa had committed the Herero and Namaqua genocide in modern-day Namibia, the first genocide of the 20th century.", "title": "Imperialism by country" }, { "paragraph_id": 71, "text": "German colonies included the present territories of Tanzania, Rwanda, Burundi, Namibia, Cameroon, Ghana and Togo in Africa; New Guinea, the Solomon Islands, Nauru, the Marshall Islands, the Mariana Islands, the Caroline Islands and Samoa in Oceania; and Qingdao, Yantai and Jiaozhou Bay in Asia. The Treaty of Versailles made them mandates temporarily operated by the Allied victors. Germany also lost part of its eastern territories, which became part of independent Poland as a result of the Treaty of Versailles in 1919. Finally, the eastern territories captured in the Middle Ages were torn from Germany and became part of Poland and the USSR as a result of the territorial reorganization established by the Potsdam Conference of the great powers in 1945.", "title": "Imperialism by country" }, { "paragraph_id": 72, "text": "The Italian Empire (Impero italiano) comprised the overseas possessions of the Kingdom of Italy primarily in northeast Africa. It began with the purchase in 1869 of Assab Bay on the Red Sea by an Italian navigation company which intended to establish a coaling station at the time the Suez Canal was being opened to navigation. This was taken over by the Italian government in 1882, becoming modern Italy's first overseas territory.
By the start of the First World War in 1914, Italy had acquired in Africa the colony of Eritrea on the Red Sea coast, a large protectorate and later colony in Somalia, and authority in formerly Ottoman Tripolitania and Cyrenaica (gained after the Italo-Turkish War) which were later unified in the colony of Libya.", "title": "Imperialism by country" }, { "paragraph_id": 73, "text": "Outside Africa, Italy possessed the Dodecanese Islands off the coast of Turkey (following the Italo-Turkish War) and a small concession in Tianjin in China following the Boxer War of 1900. During the First World War, Italy occupied southern Albania to prevent it from falling to Austria-Hungary. In 1917, it established a protectorate over Albania, which remained in place until 1920. The Fascist government that came to power with Benito Mussolini in 1922 sought to increase the size of the Italian empire and to satisfy the claims of Italian irredentists.", "title": "Imperialism by country" }, { "paragraph_id": 74, "text": "In its second invasion of Ethiopia in 1935–36, Italy was successful and it merged its new conquest with its older east African colonies to create Italian East Africa. In 1939, Italy invaded Albania and incorporated it into the Fascist state. During the Second World War (1939–1945), Italy occupied British Somaliland, parts of south-eastern France, western Egypt and most of Greece, but then lost those conquests and its African colonies, including Ethiopia, to the invading allied forces by 1943. It was forced in the peace treaty of 1947 to relinquish sovereignty over all its colonies. It was granted a trust to administer former Italian Somaliland under United Nations supervision in 1950. When Somalia became independent in 1960, Italy's eight-decade experiment with colonialism ended.", "title": "Imperialism by country" }, { "paragraph_id": 75, "text": "For over 200 years, Japan maintained a feudal society during a period of relative isolation from the rest of the world. 
However, in the 1850s, military pressure from the United States and other world powers coerced Japan into opening itself to the global market, resulting in an end to the country's isolation. A period of conflicts and political revolutions followed due to socioeconomic uncertainty, ending in 1868 with the reunification of political power under the Japanese Emperor during the Meiji Restoration. This sparked a period of rapid industrialization driven in part by a Japanese desire for self-sufficiency. By the early 1900s, Japan was a naval power that could hold its own against an established European power, as its defeat of Russia demonstrated.", "title": "Imperialism by country" }, { "paragraph_id": 76, "text": "Despite its rising population and increasingly industrialized economy, Japan lacked significant natural resources. As a result, the country turned to imperialism and expansionism in part as a means of compensating for these shortcomings, adopting the national motto \"Fukoku kyōhei\" (富国強兵, \"Enrich the state, strengthen the military\").", "title": "Imperialism by country" }, { "paragraph_id": 77, "text": "Japan was eager to take every opportunity. In 1869 it took advantage of the defeat of the rebels of the Republic of Ezo to definitively incorporate the island of Hokkaido into Japan. For centuries, Japan viewed the Ryukyu Islands as one of its provinces. In 1871 the Mudan incident happened: Taiwanese aborigines murdered 54 Ryūkyūan sailors who had been shipwrecked. At that time the Ryukyu Islands were claimed by both Qing China and Japan, and the Japanese interpreted the incident as an attack on their citizens. They took steps to bring the islands under their jurisdiction: in 1872 the Japanese Ryukyu Domain was declared, and in 1874 a retaliatory expedition to Taiwan was sent, which was a success. The success of this expedition emboldened the Japanese: not even the Americans had been able to defeat the Taiwanese in the Formosa Expedition of 1867.
Very few gave it much thought at the time, but this was the first in a series of Japanese expansionist moves. Japan occupied Taiwan for the rest of 1874 and then left owing to Chinese pressure, but in 1879 it finally annexed the Ryukyu Islands. In 1875 Qing China sent a 300-man force to subdue the Taiwanese, but unlike the Japanese the Chinese were ambushed and routed, losing 250 of their men; the failure of this expedition exposed once more Qing China's inability to exert effective control in Taiwan, and acted as another incentive for the Japanese to annex the island. Eventually, the spoils for winning the First Sino-Japanese War in 1894 included Taiwan.", "title": "Imperialism by country" }, { "paragraph_id": 78, "text": "In 1875 Japan launched its first operation against Joseon Korea, another territory it had coveted for centuries; the Ganghwa Island incident opened Korea to international trade. Korea was annexed in 1910. As a result of winning the Russo-Japanese War in 1905, Japan took part of Sakhalin Island from Russia. The victory against the Russian Empire shook the world: never before had an Asian nation defeated a European power, and in Japan it was seen as a major feat. Japan's victory against Russia would serve as a precedent for Asian countries in their fight against the Western powers for decolonization. During World War I, Japan took German-leased territories in China's Shandong Province, as well as the Mariana, Caroline, and Marshall Islands, and kept the islands as League of Nations mandates. At first, Japan was in good standing with the victorious Allied powers of World War I, but various disputes and dissatisfaction with the rewards of the treaties cooled relations with them; for example, American pressure forced it to return the Shandong area. By the 1930s, economic depression, an urgent need for resources and growing distrust of the Allied powers pushed Japan toward a hardened militaristic stance.
Through the decade, it would grow closer to Germany and Italy, forming the Axis alliance with them. In 1931 Japan took Manchuria from China. International reaction condemned the move, but Japan's already strong skepticism toward the Allied nations meant that it carried on nevertheless.", "title": "Imperialism by country" }, { "paragraph_id": 79, "text": "During the Second Sino-Japanese War in 1937, Japan's military invaded central China. In 1938–1939 Japan also attempted to seize the territory of Soviet Russia and Mongolia, but suffered serious defeats (see Battle of Lake Khasan, Battles of Khalkhin Gol). By now, relations with the Allied powers were at their lowest point, and an international boycott against Japan to deprive it of natural resources was enforced. A military move to gain access to them was deemed necessary, and so Japan attacked Pearl Harbor, bringing the United States into World War II. Using its superior technological advances in naval aviation and its modern doctrines of amphibious and naval warfare, Japan achieved one of the fastest maritime expansions in history. By 1942 Japan had conquered much of East Asia and the Pacific, including eastern China, Hong Kong, Thailand, Vietnam, Cambodia, Burma (Myanmar), Malaysia, the Philippines, Indonesia, part of New Guinea and many islands of the Pacific Ocean. Just as Japan's late industrialization success and victory against the Russian Empire were seen as an example among underdeveloped Asia-Pacific nations, the Japanese took advantage of this and promoted among its conquered peoples the goal of jointly creating an anti-European \"Greater East Asia Co-Prosperity Sphere\". This plan helped the Japanese gain support from native populations during its conquests, especially in Indonesia.
However, the United States had a vastly stronger military and industrial base and defeated Japan, stripping it of its conquests and returning its settlers to Japan.", "title": "Imperialism by country" }, { "paragraph_id": 80, "text": "The most notable example of Dutch imperialism concerns Indonesia.", "title": "Imperialism by country" }, { "paragraph_id": 81, "text": "The Ottoman Empire was an imperial state that lasted from 1299 to 1922. In 1453, Mehmed the Conqueror captured Constantinople and made it his capital. During the 16th and 17th centuries, in particular at the height of its power under the reign of Suleiman the Magnificent, the Ottoman Empire was a powerful multinational, multilingual empire, which invaded and colonized much of Southeast Europe, Western Asia, the Caucasus, North Africa, and the Horn of Africa. Its repeated invasions and brutal treatment of Slavs led to the Great Migrations of the Serbs to escape persecution. At the beginning of the 17th century the empire contained 32 provinces and numerous vassal states. Some of these were later absorbed into the empire, while others were granted various types of autonomy during the course of centuries.", "title": "Imperialism by country" }, { "paragraph_id": 82, "text": "Following a long period of military setbacks against European powers, the Ottoman Empire gradually declined, losing control of much of its territory in Europe and Africa.", "title": "Imperialism by country" }, { "paragraph_id": 83, "text": "By 1810 Egypt was effectively independent. In 1821–1829 the Greeks in the Greek War of Independence were assisted by Russia, Britain and France. From 1815 to 1914 the Ottoman Empire could survive only amid the acute rivalry of the great powers, with Britain as its main supporter, especially in the Crimean War of 1853–1856 against Russia.
After the Ottoman defeat in the Russo-Turkish War (1877–1878), Bulgaria, Serbia and Montenegro gained independence, and Britain took colonial control of Cyprus, while Bosnia and Herzegovina was occupied by the Austro-Hungarian Empire in 1878 and annexed in 1908.", "title": "Imperialism by country" }, { "paragraph_id": 84, "text": "The empire allied with Germany in World War I with the imperial ambition of recovering its lost territories, but it dissolved in the aftermath of its decisive defeat. The Kemalist national movement, supported by Soviet Russia, achieved victory in the course of the Turkish War of Independence, and the parties signed and ratified the Treaty of Lausanne in 1923 and 1924. The Republic of Turkey was established.", "title": "Imperialism by country" }, { "paragraph_id": 85, "text": "By the 18th century, the Russian Empire extended its control to the Pacific, peacefully forming a common border with the Qing Empire and the Empire of Japan. This expansion took place through a large number of military invasions of the lands east, west, and south of it. The Polish–Russian War of 1792 took place after Polish nobility from the Polish–Lithuanian Commonwealth wrote the Constitution of 3 May 1791. The war resulted in eastern Poland being conquered by Imperial Russia as a colony until 1918. The southern campaigns involved a series of Russo-Persian Wars, which began with the Persian Expedition of 1796, resulting in the acquisition of Georgia as a protectorate. Between 1800 and 1864, Imperial armies invaded south in the Russian conquest of the Caucasus, the Murid War, and the Russo-Circassian War. This last conflict led to the ethnic cleansing of Circassians from their lands. The Russian conquest of Siberia over the Khanate of Sibir took place in the 16th and 17th centuries, and resulted in the slaughter of various indigenous tribes by Russians, including the Daur, the Koryaks, the Itelmens, the Mansi and the Chukchi.
The Russian colonization of Central and Eastern Europe and Siberia and the treatment of the resident indigenous peoples has been compared to the European colonization of the Americas, with similar negative impacts on the indigenous Siberians as upon the indigenous peoples of the Americas. The extermination of indigenous Siberian tribes was so complete that a relatively small population of only 180,000 is said to exist today. The Russian Empire exploited and suppressed Cossack hosts during this period, before turning them into a special military estate (soslovie) in the late 18th century. Cossacks were then used in Imperial Russian campaigns against other tribes.", "title": "Imperialism by country" }, { "paragraph_id": 86, "text": "The acquisition of Ukraine by Russia commenced in 1654 with the Pereiaslav Agreement. Georgia's accession to Russia in 1783 was marked by the Treaty of Georgievsk.", "title": "Imperialism by country" }, { "paragraph_id": 87, "text": "Bolshevik leaders had effectively reestablished a polity with roughly the same extent as that empire by 1921, though with an internationalist ideology: Lenin in particular asserted the right to limited self-determination for national minorities within the new territory. Beginning in 1923, the policy of \"Indigenization\" [korenizatsiya] was intended to support non-Russians in developing their national cultures within a socialist framework. Never formally revoked, it stopped being implemented after 1932. After World War II, the Soviet Union installed socialist regimes modeled on those it had installed in 1919–20 in the old Russian Empire, in areas its forces occupied in Eastern Europe. The Soviet Union and later the People's Republic of China supported revolutionary and communist movements in foreign nations and colonies to advance their own interests, but were not always successful. The USSR provided great assistance to the Kuomintang in 1926–1928 in the formation of a unified Chinese government (see Northern Expedition).
Although relations with the USSR subsequently deteriorated, it was the only world power that provided military assistance to China against Japanese aggression in 1937–1941 (see Sino-Soviet Non-Aggression Pact). The victory of the Chinese Communists in the civil war of 1946–1949 relied on substantial help from the USSR (see Chinese Civil War).", "title": "Imperialism by country" }, { "paragraph_id": 88, "text": "Trotsky, and others, believed that the revolution could only succeed in Russia as part of a world revolution. Lenin wrote extensively on the matter and famously declared that imperialism was the highest stage of capitalism. However, after Lenin's death, Joseph Stalin established 'socialism in one country' for the Soviet Union, creating the model for subsequent inward-looking Stalinist states and purging the early internationalist elements. The internationalist tendencies of the early revolution would be abandoned until they returned in the framework of a client state in competition with the Americans during the Cold War. In the post-Stalin period in the late 1950s, the new political leader Nikita Khrushchev put pressure on Soviet-American relations by starting a new wave of anti-imperialist propaganda. In his speech at the UN in 1960, he announced the continuation of the war on imperialism, stating that the peoples of different countries would soon come together and overthrow their imperialist leaders. Although the Soviet Union declared itself anti-imperialist, critics argue that it exhibited traits common to historic empires. Some scholars hold that the Soviet Union was a hybrid entity containing elements common to both multinational empires and nation-states. Some also argued that the USSR practiced colonialism as did other imperial powers and was carrying on the old Russian tradition of expansion and control. Mao Zedong once argued that the Soviet Union had itself become an imperialist power while maintaining a socialist façade.
Moreover, accusations of imperialism were voiced at the highest levels of government. Josip Broz Tito and Milovan Djilas referred to the Stalinist USSR's foreign policies, such as the occupation and economic exploitation of Eastern Europe and its aggressive and hostile policy towards Yugoslavia, as Soviet imperialism. Some Marxists within the Russian Empire and later the USSR, like Sultan Galiev and Vasyl Shakhrai, considered the Soviet regime a renewed version of Russian imperialism and colonialism. The crushing of the Hungarian Revolution of 1956 and the Soviet–Afghan War have been cited as examples.", "title": "Imperialism by country" }, { "paragraph_id": 89, "text": "Made up of former colonies itself, the early United States expressed its opposition to imperialism, at least in a form distinct from its own Manifest Destiny, through policies such as the Monroe Doctrine. However, the US unsuccessfully attempted to capture Canada in the War of 1812. The United States extracted very significant territorial concessions from Mexico during the Mexican–American War. Beginning in the late 19th and early 20th century, policies such as Theodore Roosevelt’s interventionism in Central America and Woodrow Wilson’s mission to \"make the world safe for democracy\" changed all this. These policies were often backed by military force, but were more often effected from behind the scenes. This is consistent with the general notion of hegemony and imperium of historical empires. In 1898, Americans who opposed imperialism created the Anti-Imperialist League to oppose the US annexation of the Philippines and Cuba. One year later, a war erupted in the Philippines, causing business, labor and government leaders in the US to condemn America's occupation of the Philippines and to denounce it for causing the deaths of many Filipinos.
American foreign policy was denounced as a \"racket\" by Smedley Butler, a former American general who had become a spokesman for the far left.", "title": "Imperialism by country" }, { "paragraph_id": 90, "text": "At the start of World War II, President Franklin D. Roosevelt was opposed to European colonialism, especially in India. He pulled back when Britain's Winston Churchill demanded that victory in the war be the first priority. Roosevelt expected that the United Nations would take up the problem of decolonization.", "title": "Imperialism by country" }, { "paragraph_id": 91, "text": "Some have described the internal strife between various people groups as a form of imperialism or colonialism. This internal form is distinct from informal U.S. imperialism in the form of political and financial hegemony. It is also distinct from the United States' formation of \"colonies\" abroad. Through the treatment of its indigenous peoples during westward expansion, the United States took on the form of an imperial power prior to any attempts at external imperialism. This internal form of empire has been referred to as \"internal colonialism\". Participation in the African slave trade and the subsequent treatment of the 12 to 15 million Africans is viewed by some as a more modern extension of America's \"internal colonialism\". However, this internal colonialism faced resistance, as external colonialism did, but the anti-colonial presence was far less prominent due to the nearly complete dominance that the United States was able to assert over both indigenous peoples and African-Americans. In a lecture on April 16, 2003, Edward Said described modern imperialism in the United States as an aggressive means of attack towards the contemporary Orient, stating that \"due to their backward living, lack of democracy and the violation of women’s rights.
The western world forgets during this process of converting the other that enlightenment and democracy are concepts that not all will agree upon\".", "title": "Imperialism by country" }, { "paragraph_id": 92, "text": "Spanish imperialism in the colonial era corresponds with the rise and decline of the Spanish Empire, conventionally recognized as emerging in 1402 with the conquest of the Canary Islands. Following the successes of exploratory maritime voyages conducted during the Age of Discovery, Spain committed considerable financial and military resources towards developing a robust navy capable of conducting large-scale, transatlantic expeditionary operations in order to establish and solidify a firm imperial presence across large portions of North America, South America, and the geographic regions comprising the Caribbean basin. Concomitant with Spanish endorsement and sponsorship of transatlantic expeditionary voyages was the deployment of Conquistadors, which further expanded Spanish imperial boundaries through the acquisition and development of territories and colonies.", "title": "Imperialism by country" }, { "paragraph_id": 93, "text": "In congruence with the colonialist activities of competing European imperial powers throughout the 15th – 19th centuries, the Spanish were equally engrossed in extending geopolitical power. The Caribbean basin functioned as a key geographic focal point for advancing Spanish imperialism. 
Similar to the strategic prioritization Spain placed towards achieving victory in the conquests of the Aztec Empire and Inca Empire, Spain placed equal strategic emphasis on expanding the nation's imperial footprint within the Caribbean basin.", "title": "Imperialism by country" }, { "paragraph_id": 94, "text": "Echoing the prevailing ideological perspectives regarding colonialism and imperialism embraced by Spain's European rivals during the colonial era, including the English, French, and the Dutch, the Spanish used colonialism as a means of expanding imperial geopolitical borders and securing the defense of maritime trade routes in the Caribbean basin.", "title": "Imperialism by country" }, { "paragraph_id": 95, "text": "While leveraging colonialism in the same geographic operating theater as its imperial rivals, Spain maintained distinct imperial objectives and instituted a unique form of colonialism in support of its imperial agenda. Spain placed significant strategic emphasis on the acquisition, extraction, and exportation of precious metals (primarily gold and silver). A second objective was the evangelization of subjugated indigenous populations residing in mineral-rich and strategically favorable locations. Notable examples of these indigenous groups include the Taíno populations inhabiting Puerto Rico and segments of Cuba. Compulsory labor and slavery were widely institutionalized across Spanish-occupied territories and colonies, with an initial emphasis on directing labor towards mining activity and related methods of procuring semi-precious metals.
The emergence of the Encomienda system during the 16th–17th centuries in occupied colonies within the Caribbean basin reflects a gradual shift in imperial prioritization, increasingly focusing on large-scale production and exportation of agricultural commodities.", "title": "Imperialism by country" }, { "paragraph_id": 96, "text": "The scope and scale of Spanish participation in imperialism within the Caribbean basin remains a subject of scholarly debate among historians. A fundamental source of contention stems from the inadvertent conflation of theoretical conceptions of imperialism and colonialism. Furthermore, significant variation exists in the definition and interpretation of these terms as expounded by historians, anthropologists, philosophers, and political scientists.", "title": "Imperialism by country" }, { "paragraph_id": 97, "text": "Among historians, there is substantial support in favor of approaching imperialism as a conceptual theory emerging during the 18th–19th centuries, particularly within Britain, propagated by key exponents such as Joseph Chamberlain and Benjamin Disraeli. In accordance with this theoretical perspective, the activities of the Spanish in the Caribbean are not components of a preeminent, ideologically-driven form of imperialism. Rather, these activities are more accurately classified as representing a form of colonialism.", "title": "Imperialism by country" }, { "paragraph_id": 98, "text": "Further divergence among historians can be attributed to varying theoretical perspectives regarding imperialism that are proposed by emerging academic schools of thought. 
Noteworthy examples include cultural imperialism, whereby proponents such as John Downing and Annabelle Sreberny-Mohammadi define imperialism as \"...the conquest and control of one country by a more powerful one. Cultural imperialism signifies the dimensions of the process that go beyond economic exploitation or military force.\" Moreover, colonialism is understood as \"...the form of imperialism in which the government of the colony is run directly by foreigners.\"", "title": "Imperialism by country" }, { "paragraph_id": 99, "text": "In spite of diverging perspectives and the absence of a unanimous scholarly consensus regarding imperialism among historians, within the context of Spanish expansion in the Caribbean basin during the colonial era, imperialism can be interpreted as an overarching ideological agenda that is perpetuated through the institution of colonialism. In this context, colonialism functions as an instrument designed to achieve specific imperialist objectives.", "title": "Imperialism by country" }, { "paragraph_id": 100, "text": "Primary sources", "title": "Further reading" } ]
Imperialism is the practice, theory or attitude of maintaining or extending power over foreign nations, particularly through expansionism, employing both hard power and soft power. Imperialism focuses on establishing or maintaining hegemony and a more or less formal empire. While related to the concepts of colonialism, imperialism is a distinct concept that can apply to other forms of expansion and many forms of government.
2001-12-04T14:30:14Z
2023-12-24T20:40:11Z
[ "Template:Wiktionary", "Template:Dubious", "Template:Cite web", "Template:Politics", "Template:Authoritarian types of rule", "Template:Political ideologies", "Template:International relations", "Template:Rp", "Template:Page needed", "Template:Fact", "Template:Reflist", "Template:Wikiquote", "Template:Which", "Template:Div col end", "Template:ISBN", "Template:Legend", "Template:Div col", "Template:Citation", "Template:Cbignore", "Template:See also", "Template:Expand section", "Template:Cite magazine", "Template:Cite news", "Template:Empires", "Template:Short description", "Template:Other uses", "Template:Use dmy dates", "Template:Cite encyclopedia", "Template:Commons category", "Template:Refend", "Template:Redirect", "Template:Citation needed", "Template:Further", "Template:Harvard citation no brackets", "Template:Refbegin", "Template:Authority control", "Template:Main", "Template:Blockquote", "Template:Lang-fr", "Template:Cite book", "Template:Cite journal" ]
https://en.wikipedia.org/wiki/Imperialism
15,317
Internet Protocol version 4
Internet Protocol version 4 (IPv4) is the fourth version of the Internet Protocol (IP). It is one of the core protocols of standards-based internetworking methods in the Internet and other packet-switched networks. IPv4 was the first version deployed for production on SATNET in 1982 and on the ARPANET in January 1983. It is still used to route most Internet traffic today, even with the ongoing deployment of Internet Protocol version 6 (IPv6), its successor. IPv4 uses a 32-bit address space which provides 4,294,967,296 (2³²) unique addresses, but large blocks are reserved for special networking purposes. Internet Protocol version 4 is described in IETF publication RFC 791 (September 1981), replacing an earlier definition of January 1980 (RFC 760). In March 1982, the US Department of Defense decided on the Internet Protocol Suite (TCP/IP) as the standard for all military computer networking. The Internet Protocol is the protocol that defines and enables internetworking at the internet layer of the Internet Protocol Suite. In essence it forms the Internet. It uses a logical addressing system and performs routing, which is the forwarding of packets from a source host to the next router that is one hop closer to the intended destination host on another network. IPv4 is a connectionless protocol, and operates on a best-effort delivery model, in that it does not guarantee delivery, nor does it assure proper sequencing or avoidance of duplicate delivery. These aspects, including data integrity, are addressed by an upper layer transport protocol, such as the Transmission Control Protocol (TCP). IPv4 uses 32-bit addresses which limits the address space to 4294967296 (2³²) addresses. IPv4 reserves special address blocks for private networks (~18 million addresses) and multicast addresses (~270 million addresses). IPv4 addresses may be represented in any notation expressing a 32-bit integer value.
They are most often written in dot-decimal notation, which consists of four octets of the address expressed individually in decimal numbers and separated by periods. For example, the quad-dotted IP address in the illustration (172.16.254.1) represents the 32-bit decimal number 2886794753, which in hexadecimal format is 0xAC10FE01. CIDR notation combines the address with its routing prefix in a compact format, in which the address is followed by a slash character (/) and the count of leading consecutive 1 bits in the routing prefix (subnet mask). Other address representations were in common use when classful networking was practiced. For example, the loopback address 127.0.0.1 was commonly written as 127.1, given that it belongs to a class-A network with eight bits for the network mask and 24 bits for the host number. When fewer than four numbers were specified in the address in dotted notation, the last value was treated as an integer of as many bytes as are required to fill out the address to four octets. Thus, the address 127.65530 is equivalent to 127.0.255.250. In the original design of IPv4, an IP address was divided into two parts: the network identifier was the most significant octet of the address, and the host identifier was the rest of the address. The latter was also called the rest field. This structure permitted a maximum of 256 network identifiers, which was quickly found to be inadequate. To overcome this limit, the most-significant address octet was redefined in 1981 to create network classes, in a system which later became known as classful networking. The revised system defined five classes. Classes A, B, and C had different bit lengths for network identification. The rest of the address was used as previously to identify a host within a network. Because of the different sizes of fields in different classes, each network class had a different capacity for addressing hosts. 
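The notations above are interconvertible; a brief sketch using Python's standard `ipaddress` module, with the example address from the text:

```python
import ipaddress

# Dot-decimal notation and the underlying 32-bit integer are interchangeable.
addr = ipaddress.IPv4Address("172.16.254.1")
print(int(addr))       # 2886794753
print(hex(int(addr)))  # 0xac10fe01

# The reverse conversion: a 32-bit integer back to quad-dotted form.
print(ipaddress.IPv4Address(2886794753))  # 172.16.254.1

# CIDR notation: the address followed by the count of leading 1 bits in the
# subnet mask (the /20 prefix here is just an illustrative choice).
iface = ipaddress.IPv4Interface("172.16.254.1/20")
print(iface.network)   # 172.16.240.0/20
```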
In addition to the three classes for addressing hosts, Class D was defined for multicast addressing and Class E was reserved for future applications. Dividing existing classful networks into subnets began in 1985 with the publication of RFC 950. This division was made more flexible with the introduction of variable-length subnet masks (VLSM) in RFC 1009 in 1987. In 1993, based on this work, RFC 1517 introduced Classless Inter-Domain Routing (CIDR), which expressed the number of bits (from the most significant) as, for instance, /24, and the class-based scheme was dubbed classful, by contrast. CIDR was designed to permit repartitioning of any address space so that smaller or larger blocks of addresses could be allocated to users. The hierarchical structure created by CIDR is managed by the Internet Assigned Numbers Authority (IANA) and the regional Internet registries (RIRs). Each RIR maintains a publicly searchable WHOIS database that provides information about IP address assignments. The Internet Engineering Task Force (IETF) and IANA have restricted from general use various reserved IP addresses for special purposes. Notably, these addresses are used for multicast traffic and to provide addressing space for unrestricted uses on private networks. Of the approximately four billion addresses defined in IPv4, about 18 million addresses in three ranges are reserved for use in private networks. Packets with addresses in these ranges are not routable in the public Internet; they are ignored by all public routers. Therefore, private hosts cannot directly communicate with public networks, but require network address translation at a routing gateway for this purpose.
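The "about 18 million" figure can be checked against the three private blocks of RFC 1918; a minimal sketch with the standard `ipaddress` module:

```python
import ipaddress

# The three address blocks reserved for private networks (RFC 1918).
private_blocks = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

total = sum(block.num_addresses for block in private_blocks)
print(total)  # 17891328, i.e. about 18 million

# Membership checks: private hosts need NAT to reach the public Internet.
print(ipaddress.ip_address("192.168.1.50").is_private)  # True
print(ipaddress.ip_address("8.8.8.8").is_private)       # False
```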
Since two private networks, e.g., two branch offices, cannot directly interoperate via the public Internet, the two networks must be bridged across the Internet via a virtual private network (VPN) or an IP tunnel, which encapsulates packets, including their headers containing the private addresses, in a protocol layer during transmission across the public network. Additionally, encapsulated packets may be encrypted for transmission across public networks to secure the data. RFC 3927 defines the special address block 169.254.0.0/16 for link-local addressing. These addresses are only valid on the link (such as a local network segment or point-to-point connection) directly connected to a host that uses them. These addresses are not routable. Like private addresses, these addresses cannot be the source or destination of packets traversing the internet. These addresses are primarily used for address autoconfiguration (Zeroconf) when a host cannot obtain an IP address from a DHCP server or other internal configuration methods. When the address block was reserved, no standards existed for address autoconfiguration. Microsoft created an implementation called Automatic Private IP Addressing (APIPA), which was deployed on millions of machines and became a de facto standard. Many years later, in May 2005, the IETF defined a formal standard in RFC 3927, entitled Dynamic Configuration of IPv4 Link-Local Addresses. The class A network 127.0.0.0 (classless network 127.0.0.0/8) is reserved for loopback. IP packets whose source addresses belong to this network should never appear outside a host. Packets received on a non-loopback interface with a loopback source or destination address must be dropped. The first address in a subnet is used to identify the subnet itself. In this address all host bits are 0. To avoid ambiguity in representation, this address is reserved. The last address has all host bits set to 1. 
It is used as a local broadcast address for sending messages to all devices on the subnet simultaneously. For networks of size /24 or larger, the broadcast address always ends in 255. For example, in the subnet 192.168.5.0/24 (subnet mask 255.255.255.0) the identifier 192.168.5.0 is used to refer to the entire subnet. The broadcast address of the network is 192.168.5.255. However, this does not mean that every address ending in 0 or 255 cannot be used as a host address. For example, in the /16 subnet 192.168.0.0/255.255.0.0, which is equivalent to the address range 192.168.0.0–192.168.255.255, the broadcast address is 192.168.255.255. One can use the following addresses for hosts, even though they end with 255: 192.168.1.255, 192.168.2.255, etc. Also, 192.168.0.0 is the network identifier and must not be assigned to an interface. The addresses 192.168.1.0, 192.168.2.0, etc., may be assigned, despite ending with 0. In the past, conflict between network addresses and broadcast addresses arose because some software used non-standard broadcast addresses with zeros instead of ones. In networks smaller than /24, broadcast addresses do not necessarily end with 255. For example, a CIDR subnet 203.0.113.16/28 has the broadcast address 203.0.113.31. As a special case, a /31 network has capacity for just two hosts. These networks are typically used for point-to-point connections. There is no network identifier or broadcast address for these networks. Hosts on the Internet are usually known by names, e.g., www.example.com, not primarily by their IP address, which is used for routing and network interface identification. The use of domain names requires translating them to addresses and vice versa, a process called resolving. This is analogous to looking up a phone number in a phone book using the recipient's name.
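The subnet arithmetic above can be reproduced with the standard `ipaddress` module; the networks are the examples from the text (recent Python versions handle the /31 case per RFC 3021):

```python
import ipaddress

# In a /28, the broadcast address does not end in 255.
net = ipaddress.ip_network("203.0.113.16/28")
print(net.network_address)    # 203.0.113.16
print(net.broadcast_address)  # 203.0.113.31

# In a /16, addresses like 192.168.1.255 are ordinary host addresses.
wide = ipaddress.ip_network("192.168.0.0/16")
print(wide.broadcast_address)                         # 192.168.255.255
print(ipaddress.ip_address("192.168.1.255") in wide)  # True

# A /31 point-to-point network: both addresses are usable as hosts.
p2p = ipaddress.ip_network("192.0.2.0/31")
print([str(h) for h in p2p.hosts()])  # ['192.0.2.0', '192.0.2.1']
```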
The translation between addresses and domain names is performed by the Domain Name System (DNS), a hierarchical, distributed naming system that allows for the subdelegation of namespaces to other DNS servers. An unnumbered point-to-point (PtP) link, also called a transit link, is a link that does not have an IP network or subnet number associated with it, but still has an IP address. First introduced in 1993, the concept is credited to Phil Karn of Qualcomm, its original designer. The purpose of a transit link is to route datagrams. They are used to conserve scarce IP address space and to reduce the burden of assigning IP addresses and configuring interfaces. Previously, every link needed to dedicate a /31 or /30 subnet using 2 or 4 IP addresses per point-to-point link. When a link is unnumbered, a router-id is used, a single IP address borrowed from a defined (normally a loopback) interface. The same router-id can be used on multiple interfaces. One of the disadvantages of unnumbered interfaces is that it is harder to do remote testing and management. In the 1980s, it became apparent that the pool of available IPv4 addresses was depleting at a rate that was not initially anticipated in the original design of the network. The main market forces that accelerated address depletion included the rapidly growing number of Internet users, who increasingly used mobile computing devices, such as laptop computers, personal digital assistants (PDAs), and smart phones with IP data services. In addition, high-speed Internet access was based on always-on devices. The threat of exhaustion motivated the introduction of a number of remedial technologies, such as: By the mid-1990s, NAT was used pervasively in network access provider systems, along with strict usage-based allocation policies at the regional and local Internet registries.
The primary address pool of the Internet, maintained by IANA, was exhausted on 3 February 2011, when the last five blocks were allocated to the five RIRs. APNIC was the first RIR to exhaust its regional pool on 15 April 2011, except for a small amount of address space reserved for the transition technologies to IPv6, which is to be allocated under a restricted policy. The long-term solution to address exhaustion was the 1998 specification of a new version of the Internet Protocol, IPv6. It provides a vastly increased address space, but also allows improved route aggregation across the Internet, and offers large subnetwork allocations of a minimum of 2⁶⁴ host addresses to end users. However, IPv4 is not directly interoperable with IPv6, so that IPv4-only hosts cannot directly communicate with IPv6-only hosts. With the phase-out of the 6bone experimental network starting in 2004, permanent formal deployment of IPv6 commenced in 2006. Completion of IPv6 deployment is expected to take considerable time, so that intermediate transition technologies are necessary to permit hosts to participate in the Internet using both versions of the protocol. An IP packet consists of a header section and a data section. An IP packet has no data checksum or any other footer after the data section. Typically the link layer encapsulates IP packets in frames with a CRC footer that detects most errors; many transport-layer protocols carried by IP also have their own error checking. The IPv4 packet header consists of 14 fields, of which 13 are required. The 14th field is optional and aptly named: options. The fields in the header are packed with the most significant byte first (network byte order), and for the diagram and discussion, the most significant bits are considered to come first (MSB 0 bit numbering). The most significant bit is numbered 0, so the version field is actually found in the four most significant bits of the first byte, for example.
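To make the field layout and byte ordering concrete, the sketch below builds a hypothetical 20-byte header (the addresses and field values are invented for illustration), extracts the version and IHL from the high and low nibbles of the first byte, and computes the header checksum with the standard ones'-complement algorithm; note the payload takes no part in this checksum:

```python
import struct

def header_checksum(hdr: bytes) -> int:
    """Ones' complement sum of the header's 16-bit words (checksum field zeroed)."""
    total = sum(struct.unpack("!%dH" % (len(hdr) // 2), hdr))
    while total >> 16:                        # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A hypothetical minimal header: version 4, IHL 5, total length 84, TTL 64,
# protocol 1 (ICMP), checksum zeroed for now, source 10.0.0.1, dest 10.0.0.2.
hdr = bytearray.fromhex("45000054000100004001" + "0000" + "0a0000010a000002")

version = hdr[0] >> 4   # 4 (the four most significant bits of the first byte)
ihl = hdr[0] & 0x0F     # 5 -> header length is 5 * 4 = 20 bytes

# Fill in the checksum at byte offset 10; a valid header then sums to zero.
struct.pack_into("!H", hdr, 10, header_checksum(bytes(hdr)))
print(version, ihl, header_checksum(bytes(hdr)))  # 4 5 0
```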
The packet payload is not included in the checksum. Its contents are interpreted based on the value of the Protocol header field. List of IP protocol numbers contains a complete list of payload protocol types. Some of the common payload protocols include: The Internet Protocol enables traffic between networks. The design accommodates networks of diverse physical nature; it is independent of the underlying transmission technology used in the link layer. Networks with different hardware usually vary not only in transmission speed, but also in the maximum transmission unit (MTU). When one network wants to transmit datagrams to a network with a smaller MTU, it may fragment its datagrams. In IPv4, this function was placed at the Internet Layer and is performed in IPv4 routers, limiting exposure to these issues by hosts. In contrast, IPv6, the next generation of the Internet Protocol, does not allow routers to perform fragmentation; hosts must perform Path MTU Discovery before sending datagrams. When a router receives a packet, it examines the destination address and determines the outgoing interface to use and that interface's MTU. If the packet size is bigger than the MTU, and the Don't Fragment (DF) bit in the packet's header is set to 0, then the router may fragment the packet. The router divides the packet into fragments. The maximum size of each fragment is the outgoing MTU minus the IP header size (20 bytes minimum; 60 bytes maximum). The router puts each fragment into its own packet, each fragment packet having the following changes: For example, for an MTU of 1,500 bytes and a header size of 20 bytes, the fragment offsets would be multiples of (1,500 − 20)/8 = 185 (0, 185, 370, 555, 740, etc.). It is possible that a packet is fragmented at one router, and that the fragments are further fragmented at another router.
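The offset arithmetic can be sketched as a small function (an illustration with invented names, not a full router implementation):

```python
def fragment_layout(total_len, mtu, header_len=20):
    """Return (offset_in_8_byte_units, data_len) pairs for one packet.

    total_len includes the IP header; each non-final fragment's data length
    is rounded down to a multiple of 8 so offsets stay representable.
    """
    payload = total_len - header_len
    max_data = (mtu - header_len) // 8 * 8  # e.g. (1500 - 20) // 8 * 8 = 1480
    frags, offset = [], 0
    while payload > 0:
        data = min(max_data, payload)
        frags.append((offset // 8, data))
        offset += data
        payload -= data
    return frags

# A 4,520-byte packet on a 1,500-byte MTU link: offsets step by 185 units.
print(fragment_layout(4520, 1500))  # [(0, 1480), (185, 1480), (370, 1480), (555, 60)]
```

With an MTU of 2,500 bytes the same packet yields two fragments, `[(0, 2480), (310, 2020)]`, matching the worked example that follows.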
For example, a packet of 4,520 bytes, including a 20-byte IP header, is fragmented to two packets on a link with an MTU of 2,500 bytes: The total data size is preserved: 2,480 bytes + 2,020 bytes = 4,500 bytes. The offsets are 0 and (0 + 2,480)/8 = 310. When forwarded to a link with an MTU of 1,500 bytes, each fragment is fragmented into two fragments: Again, the data size is preserved: 1,480 + 1,000 = 2,480, and 1,480 + 540 = 2,020. In this case, too, the More Fragments bit remains 1 in every fragment produced from a fragment that carried MF = 1; only the final fragment of the original packet keeps MF = 0, as usual. The Identification field also keeps the same value in all re-fragmented fragments. This way, even if fragments are re-fragmented, the receiver knows they all started from the same packet. The last offset and last data size are used to calculate the total data size: 495 × 8 + 540 = 3,960 + 540 = 4,500. A receiver knows that a packet is a fragment, if at least one of the following conditions is true: The receiver identifies matching fragments using the source and destination addresses, the protocol ID, and the identification field. The receiver reassembles the data from fragments with the same ID using both the fragment offset and the more fragments flag. When the receiver receives the last fragment, which has the more fragments flag set to 0, it can calculate the size of the original data payload, by multiplying the last fragment's offset by eight and adding the last fragment's data size. In the given example, this calculation was 495 × 8 + 540 = 4,500 bytes. When the receiver has all fragments, they can be reassembled in the correct sequence according to the offsets to form the original datagram.
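The receiver's bookkeeping can be sketched as follows (a hypothetical helper; a real stack also keys fragments by source, destination, protocol, and Identification, and handles loss and timeouts):

```python
def reassemble(fragments):
    """fragments: (offset_in_8_byte_units, more_fragments_flag, data) tuples
    that share the same source, destination, protocol, and identification."""
    fragments = sorted(fragments, key=lambda f: f[0])
    last_offset, last_mf, last_data = fragments[-1]
    assert last_mf == 0, "the final fragment must carry More Fragments = 0"
    # Total payload size: the last offset in bytes plus the last data size.
    total = last_offset * 8 + len(last_data)
    buf = bytearray(total)
    for offset, _, data in fragments:
        buf[offset * 8 : offset * 8 + len(data)] = data
    return bytes(buf)

# The doubly fragmented example from the text: 4,500 data bytes in four pieces.
frags = [(185, 1, b"b" * 1000), (0, 1, b"a" * 1480),
         (495, 0, b"d" * 540), (310, 1, b"c" * 1480)]
print(len(reassemble(frags)))  # 4500 = 495 * 8 + 540
```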
IP addresses are not tied in any permanent manner to networking hardware and, indeed, in modern operating systems, a network interface can have multiple IP addresses. In order to properly deliver an IP packet to the destination host on a link, hosts and routers need additional mechanisms to make an association between the hardware address of network interfaces and IP addresses. The Address Resolution Protocol (ARP) performs this IP-address-to-hardware-address translation for IPv4. In addition, the reverse correlation is often necessary. For example, unless an address is preconfigured by an administrator, when an IP host is booted or connected to a network it needs to determine its IP address. Protocols for such reverse correlations include Dynamic Host Configuration Protocol (DHCP), Bootstrap Protocol (BOOTP) and, infrequently, reverse ARP. This article was adapted from the following source under a CC BY 4.0 license (2022): Michel Bakni; Sandra Hanbo (9 December 2022). "A Survey on Internet Protocol version 4 (IPv4)" (PDF). WikiJournal of Science. doi:10.15347/WJS/2022.002. ISSN 2470-6345. OCLC 9708517136. S2CID 254665961. Wikidata Q104661268.
[ { "paragraph_id": 0, "text": "Internet Protocol version 4 (IPv4) is the fourth version of the Internet Protocol (IP). It is one of the core protocols of standards-based internetworking methods in the Internet and other packet-switched networks. IPv4 was the first version deployed for production on SATNET in 1982 and on the ARPANET in January 1983. It is still used to route most Internet traffic today, even with the ongoing deployment of Internet Protocol version 6 (IPv6), its successor.", "title": "" }, { "paragraph_id": 1, "text": "IPv4 uses a 32-bit address space which provides 4,294,967,296 (2³²) unique addresses, but large blocks are reserved for special networking purposes.", "title": "" }, { "paragraph_id": 2, "text": "Internet Protocol version 4 is described in IETF publication RFC 791 (September 1981), replacing an earlier definition of January 1980 (RFC 760). In March 1982, the US Department of Defense decided on the Internet Protocol Suite (TCP/IP) as the standard for all military computer networking.", "title": "History" }, { "paragraph_id": 3, "text": "The Internet Protocol is the protocol that defines and enables internetworking at the internet layer of the Internet Protocol Suite. In essence it forms the Internet. It uses a logical addressing system and performs routing, which is the forwarding of packets from a source host to the next router that is one hop closer to the intended destination host on another network.", "title": "Purpose" }, { "paragraph_id": 4, "text": "IPv4 is a connectionless protocol, and operates on a best-effort delivery model, in that it does not guarantee delivery, nor does it assure proper sequencing or avoidance of duplicate delivery. 
These aspects, including data integrity, are addressed by an upper layer transport protocol, such as the Transmission Control Protocol (TCP).", "title": "Purpose" }, { "paragraph_id": 5, "text": "IPv4 uses 32-bit addresses which limits the address space to 4294967296 (2³²) addresses.", "title": "Addressing" }, { "paragraph_id": 6, "text": "IPv4 reserves special address blocks for private networks (~18 million addresses) and multicast addresses (~270 million addresses).", "title": "Addressing" }, { "paragraph_id": 7, "text": "IPv4 addresses may be represented in any notation expressing a 32-bit integer value. They are most often written in dot-decimal notation, which consists of four octets of the address expressed individually in decimal numbers and separated by periods.", "title": "Addressing" }, { "paragraph_id": 8, "text": "For example, the quad-dotted IP address in the illustration (172.16.254.1) represents the 32-bit decimal number 2886794753, which in hexadecimal format is 0xAC10FE01.", "title": "Addressing" }, { "paragraph_id": 9, "text": "CIDR notation combines the address with its routing prefix in a compact format, in which the address is followed by a slash character (/) and the count of leading consecutive 1 bits in the routing prefix (subnet mask).", "title": "Addressing" }, { "paragraph_id": 10, "text": "Other address representations were in common use when classful networking was practiced. For example, the loopback address 127.0.0.1 was commonly written as 127.1, given that it belongs to a class-A network with eight bits for the network mask and 24 bits for the host number. When fewer than four numbers were specified in the address in dotted notation, the last value was treated as an integer of as many bytes as are required to fill out the address to four octets. 
Thus, the address 127.65530 is equivalent to 127.0.255.250.", "title": "Addressing" }, { "paragraph_id": 11, "text": "In the original design of IPv4, an IP address was divided into two parts: the network identifier was the most significant octet of the address, and the host identifier was the rest of the address. The latter was also called the rest field. This structure permitted a maximum of 256 network identifiers, which was quickly found to be inadequate.", "title": "Addressing" }, { "paragraph_id": 12, "text": "To overcome this limit, the most-significant address octet was redefined in 1981 to create network classes, in a system which later became known as classful networking. The revised system defined five classes. Classes A, B, and C had different bit lengths for network identification. The rest of the address was used as previously to identify a host within a network. Because of the different sizes of fields in different classes, each network class had a different capacity for addressing hosts. In addition to the three classes for addressing hosts, Class D was defined for multicast addressing and Class E was reserved for future applications.", "title": "Addressing" }, { "paragraph_id": 13, "text": "Dividing existing classful networks into subnets began in 1985 with the publication of RFC 950. This division was made more flexible with the introduction of variable-length subnet masks (VLSM) in RFC 1009 in 1987. In 1993, based on this work, RFC 1517 introduced Classless Inter-Domain Routing (CIDR), which expressed the number of bits (from the most significant) as, for instance, /24, and the class-based scheme was dubbed classful, by contrast. CIDR was designed to permit repartitioning of any address space so that smaller or larger blocks of addresses could be allocated to users. The hierarchical structure created by CIDR is managed by the Internet Assigned Numbers Authority (IANA) and the regional Internet registries (RIRs). 
Each RIR maintains a publicly searchable WHOIS database that provides information about IP address assignments.", "title": "Addressing" }, { "paragraph_id": 14, "text": "The Internet Engineering Task Force (IETF) and IANA have restricted from general use various reserved IP addresses for special purposes. Notably, these addresses are used for multicast traffic and to provide addressing space for unrestricted uses on private networks.", "title": "Addressing" }, { "paragraph_id": 15, "text": "Of the approximately four billion addresses defined in IPv4, about 18 million addresses in three ranges are reserved for use in private networks. Packets with addresses in these ranges are not routable in the public Internet; they are ignored by all public routers. Therefore, private hosts cannot directly communicate with public networks, but require network address translation at a routing gateway for this purpose.", "title": "Addressing" }, { "paragraph_id": 16, "text": "", "title": "Addressing" }, { "paragraph_id": 17, "text": "Since two private networks, e.g., two branch offices, cannot directly interoperate via the public Internet, the two networks must be bridged across the Internet via a virtual private network (VPN) or an IP tunnel, which encapsulates packets, including their headers containing the private addresses, in a protocol layer during transmission across the public network. Additionally, encapsulated packets may be encrypted for transmission across public networks to secure the data.", "title": "Addressing" }, { "paragraph_id": 18, "text": "RFC 3927 defines the special address block 169.254.0.0/16 for link-local addressing. These addresses are only valid on the link (such as a local network segment or point-to-point connection) directly connected to a host that uses them. These addresses are not routable. Like private addresses, these addresses cannot be the source or destination of packets traversing the internet. 
These addresses are primarily used for address autoconfiguration (Zeroconf) when a host cannot obtain an IP address from a DHCP server or other internal configuration methods.", "title": "Addressing" }, { "paragraph_id": 19, "text": "When the address block was reserved, no standards existed for address autoconfiguration. Microsoft created an implementation called Automatic Private IP Addressing (APIPA), which was deployed on millions of machines and became a de facto standard. Many years later, in May 2005, the IETF defined a formal standard in RFC 3927, entitled Dynamic Configuration of IPv4 Link-Local Addresses.", "title": "Addressing" }, { "paragraph_id": 20, "text": "The class A network 127.0.0.0 (classless network 127.0.0.0/8) is reserved for loopback. IP packets whose source addresses belong to this network should never appear outside a host. Packets received on a non-loopback interface with a loopback source or destination address must be dropped.", "title": "Addressing" }, { "paragraph_id": 21, "text": "The first address in a subnet is used to identify the subnet itself. In this address all host bits are 0. To avoid ambiguity in representation, this address is reserved. The last address has all host bits set to 1. It is used as a local broadcast address for sending messages to all devices on the subnet simultaneously. For networks of size /24 or larger, the broadcast address always ends in 255.", "title": "Addressing" }, { "paragraph_id": 22, "text": "For example, in the subnet 192.168.5.0/24 (subnet mask 255.255.255.0) the identifier 192.168.5.0 is used to refer to the entire subnet. The broadcast address of the network is 192.168.5.255.", "title": "Addressing" }, { "paragraph_id": 23, "text": "However, this does not mean that every address ending in 0 or 255 cannot be used as a host address. 
For example, in the /16 subnet 192.168.0.0/255.255.0.0, which is equivalent to the address range 192.168.0.0–192.168.255.255, the broadcast address is 192.168.255.255. One can use the following addresses for hosts, even though they end with 255: 192.168.1.255, 192.168.2.255, etc. Also, 192.168.0.0 is the network identifier and must not be assigned to an interface. The addresses 192.168.1.0, 192.168.2.0, etc., may be assigned, despite ending with 0.", "title": "Addressing" }, { "paragraph_id": 24, "text": "In the past, conflict between network addresses and broadcast addresses arose because some software used non-standard broadcast addresses with zeros instead of ones.", "title": "Addressing" }, { "paragraph_id": 25, "text": "In networks smaller than /24, broadcast addresses do not necessarily end with 255. For example, a CIDR subnet 203.0.113.16/28 has the broadcast address 203.0.113.31.", "title": "Addressing" }, { "paragraph_id": 26, "text": "As a special case, a /31 network has capacity for just two hosts. These networks are typically used for point-to-point connections. There is no network identifier or broadcast address for these networks.", "title": "Addressing" }, { "paragraph_id": 27, "text": "Hosts on the Internet are usually known by names, e.g., www.example.com, not primarily by their IP address, which is used for routing and network interface identification. The use of domain names requires translating them to addresses and vice versa, a process called resolving. 
This is analogous to looking up a phone number in a phone book using the recipient's name.", "title": "Addressing" }, { "paragraph_id": 28, "text": "The translation between addresses and domain names is performed by the Domain Name System (DNS), a hierarchical, distributed naming system that allows for the subdelegation of namespaces to other DNS servers.", "title": "Addressing" }, { "paragraph_id": 29, "text": "An unnumbered point-to-point (PtP) link, also called a transit link, is a link that does not have an IP network or subnet number associated with it, but still has an IP address. First introduced in 1993, the concept is credited to Phil Karn of Qualcomm, its original designer.", "title": "Addressing" }, { "paragraph_id": 30, "text": "The purpose of a transit link is to route datagrams. They are used to conserve scarce IP address space and to reduce the burden of assigning IP addresses and configuring interfaces. Previously, every link needed to dedicate a /31 or /30 subnet using 2 or 4 IP addresses per point-to-point link. When a link is unnumbered, a router-id is used, a single IP address borrowed from a defined (normally a loopback) interface. The same router-id can be used on multiple interfaces.", "title": "Addressing" }, { "paragraph_id": 31, "text": "One of the disadvantages of unnumbered interfaces is that it is harder to do remote testing and management.", "title": "Addressing" }, { "paragraph_id": 32, "text": "In the 1980s, it became apparent that the pool of available IPv4 addresses was depleting at a rate that was not initially anticipated in the original design of the network. The main market forces that accelerated address depletion included the rapidly growing number of Internet users, who increasingly used mobile computing devices, such as laptop computers, personal digital assistants (PDAs), and smart phones with IP data services. In addition, high-speed Internet access was based on always-on devices. 
The threat of exhaustion motivated the introduction of a number of remedial technologies, such as:", "title": "Address space exhaustion" }, { "paragraph_id": 33, "text": "By the mid-1990s, NAT was used pervasively in network access provider systems, along with strict usage-based allocation policies at the regional and local Internet registries.", "title": "Address space exhaustion" }, { "paragraph_id": 34, "text": "The primary address pool of the Internet, maintained by IANA, was exhausted on 3 February 2011, when the last five blocks were allocated to the five RIRs. APNIC was the first RIR to exhaust its regional pool on 15 April 2011, except for a small amount of address space reserved for the transition technologies to IPv6, which is to be allocated under a restricted policy.", "title": "Address space exhaustion" }, { "paragraph_id": 35, "text": "The long-term solution to address exhaustion was the 1998 specification of a new version of the Internet Protocol, IPv6. It provides a vastly increased address space, but also allows improved route aggregation across the Internet, and offers large subnetwork allocations of a minimum of 2^64 host addresses to end users. However, IPv4 is not directly interoperable with IPv6, so that IPv4-only hosts cannot directly communicate with IPv6-only hosts. With the phase-out of the 6bone experimental network starting in 2004, permanent formal deployment of IPv6 commenced in 2006. Completion of IPv6 deployment is expected to take considerable time, so that intermediate transition technologies are necessary to permit hosts to participate in the Internet using both versions of the protocol.", "title": "Address space exhaustion" }, { "paragraph_id": 36, "text": "An IP packet consists of a header section and a data section. An IP packet has no data checksum or any other footer after the data section. 
Typically the link layer encapsulates IP packets in frames with a CRC footer that detects most errors, and many transport-layer protocols carried by IP also have their own error checking.", "title": "Packet structure" }, { "paragraph_id": 37, "text": "The IPv4 packet header consists of 14 fields, of which 13 are required. The 14th field is optional and aptly named: options. The fields in the header are packed with the most significant byte first (network byte order), and for the diagram and discussion, the most significant bits are considered to come first (MSB 0 bit numbering). The most significant bit is numbered 0, so the version field is actually found in the four most significant bits of the first byte, for example.", "title": "Packet structure" }, { "paragraph_id": 38, "text": "The packet payload is not included in the checksum. Its contents are interpreted based on the value of the Protocol header field.", "title": "Packet structure" }, { "paragraph_id": 39, "text": "List of IP protocol numbers contains a complete list of payload protocol types. Some of the common payload protocols include:", "title": "Packet structure" }, { "paragraph_id": 40, "text": "The Internet Protocol enables traffic between networks. The design accommodates networks of diverse physical nature; it is independent of the underlying transmission technology used in the link layer. Networks with different hardware usually vary not only in transmission speed, but also in the maximum transmission unit (MTU). When one network wants to transmit datagrams to a network with a smaller MTU, it may fragment its datagrams. 
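The header layout described above (network byte order, version in the four most significant bits of the first byte) can be illustrated by unpacking a fixed 20-byte header; the sample bytes below are fabricated for illustration and the checksum field is left as zero:

```python
import struct

# A fabricated 20-byte IPv4 header: version 4, IHL 5 words, total length 40,
# TTL 64, protocol 6 (TCP), 192.168.0.1 -> 192.168.0.199, checksum zeroed.
header = bytes.fromhex(
    "45000028"      # version/IHL, DSCP/ECN, total length
    "1c464000"      # identification, flags + fragment offset
    "4006"          # TTL, protocol
    "0000"          # header checksum (zeroed here for simplicity)
    "c0a80001"      # source address
    "c0a800c7"      # destination address
)

ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, cksum, src, dst = \
    struct.unpack("!BBHHHBBH4s4s", header)  # "!" = network byte order

version = ver_ihl >> 4             # version occupies the top 4 bits
ihl_bytes = (ver_ihl & 0x0F) * 4   # IHL is counted in 32-bit words
print(version, ihl_bytes, total_len, ttl, proto)  # 4 20 40 64 6
```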
In IPv4, this function was placed at the Internet Layer and is performed in IPv4 routers, limiting the exposure of hosts to these issues.", "title": "Fragmentation and reassembly" }, { "paragraph_id": 41, "text": "In contrast, IPv6, the next generation of the Internet Protocol, does not allow routers to perform fragmentation; hosts must perform Path MTU Discovery before sending datagrams.", "title": "Fragmentation and reassembly" }, { "paragraph_id": 42, "text": "When a router receives a packet, it examines the destination address and determines the outgoing interface to use and that interface's MTU. If the packet size is bigger than the MTU, and the Do not Fragment (DF) bit in the packet's header is set to 0, then the router may fragment the packet.", "title": "Fragmentation and reassembly" }, { "paragraph_id": 43, "text": "The router divides the packet into fragments. The maximum size of each fragment is the outgoing MTU minus the IP header size (20 bytes minimum; 60 bytes maximum). The router puts each fragment into its own packet, each fragment packet having the following changes:", "title": "Fragmentation and reassembly" }, { "paragraph_id": 44, "text": "For example, for an MTU of 1,500 bytes and a header size of 20 bytes, the fragment offsets would be multiples of (1,500 − 20) / 8 = 185 (0, 185, 370, 555, 740, etc.).", "title": "Fragmentation and reassembly" }, { "paragraph_id": 45, "text": "It is possible that a packet is fragmented at one router, and that the fragments are further fragmented at another router. For example, a packet of 4,520 bytes, including a 20-byte IP header, is fragmented to two packets on a link with an MTU of 2,500 bytes:", "title": "Fragmentation and reassembly" }, { "paragraph_id": 46, "text": "The total data size is preserved: 2,480 bytes + 2,020 bytes = 4,500 bytes. 
The offsets are 0 and (0 + 2,480) / 8 = 310.", "title": "Fragmentation and reassembly" }, { "paragraph_id": 47, "text": "When forwarded to a link with an MTU of 1,500 bytes, each fragment is fragmented into two fragments:", "title": "Fragmentation and reassembly" }, { "paragraph_id": 48, "text": "Again, the data size is preserved: 1,480 + 1,000 = 2,480, and 1,480 + 540 = 2,020.", "title": "Fragmentation and reassembly" }, { "paragraph_id": 49, "text": "In this case, the More Fragments bit remains 1 in all re-fragmented fragments except the very last one; only the final fragment of the original packet has the MF bit set to 0. The Identification field keeps the same value in all re-fragmented fragments, so even if fragments are re-fragmented, the receiver knows they all initially started from the same packet.", "title": "Fragmentation and reassembly" }, { "paragraph_id": 50, "text": "The last offset and last data size are used to calculate the total data size: 495 × 8 + 540 = 3,960 + 540 = 4,500.", "title": "Fragmentation and reassembly" }, { "paragraph_id": 51, "text": "A receiver knows that a packet is a fragment, if at least one of the following conditions is true:", "title": "Fragmentation and reassembly" }, { "paragraph_id": 52, "text": "The receiver identifies matching fragments using the source and destination addresses, the protocol ID, and the identification field. The receiver reassembles the data from fragments with the same ID using both the fragment offset and the more fragments flag. When the receiver receives the last fragment, which has the more fragments flag set to 0, it can calculate the size of the original data payload, by multiplying the last fragment's offset by eight and adding the last fragment's data size. 
In the given example, this calculation was 495 × 8 + 540 = 4,500 bytes. When the receiver has all fragments, they can be reassembled in the correct sequence according to the offsets to form the original datagram.", "title": "Fragmentation and reassembly" }, { "paragraph_id": 53, "text": "IP addresses are not tied in any permanent manner to networking hardware and, indeed, in modern operating systems, a network interface can have multiple IP addresses. In order to properly deliver an IP packet to the destination host on a link, hosts and routers need additional mechanisms to make an association between the hardware address of network interfaces and IP addresses. The Address Resolution Protocol (ARP) performs this IP-address-to-hardware-address translation for IPv4. In addition, the reverse correlation is often necessary. For example, unless an address is preconfigured by an administrator, when an IP host is booted or connected to a network it needs to determine its IP address. Protocols for such reverse correlations include Dynamic Host Configuration Protocol (DHCP), Bootstrap Protocol (BOOTP) and, infrequently, reverse ARP.", "title": "Assistive protocols" }, { "paragraph_id": 54, "text": "This article was adapted from the following source under a CC BY 4.0 license (2022): Michel Bakni; Sandra Hanbo (9 December 2022). \"A Survey on Internet Protocol version 4 (IPv4)\" (PDF). WikiJournal of Science. doi:10.15347/WJS/2022.002. ISSN 2470-6345. OCLC 9708517136. S2CID 254665961. Wikidata Q104661268.", "title": "References" } ]
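The worked example above can be reproduced with a short sketch. The helper names are ours, the offsets produced for a re-fragmented fragment are relative to that fragment's own data, and everything else a real stack must handle (DF bit, MF flags, Identification matching, timeouts) is deliberately ignored:

```python
def fragment_sizes(total_len, mtu, ihl=20):
    """Split a datagram into (offset_in_8_byte_units, data_size) pairs.

    Every fragment except the last must carry a multiple of 8 data bytes,
    so the usable size per fragment is (MTU - header) rounded down to 8.
    """
    payload = total_len - ihl
    max_data = (mtu - ihl) // 8 * 8
    frags, offset = [], 0
    while payload > 0:
        size = min(max_data, payload)
        frags.append((offset // 8, size))  # offset field is in 8-octet units
        offset += size
        payload -= size
    return frags

def reassemble(fragments):
    """Reorder fragments by offset and concatenate their data."""
    data = bytearray()
    for offset, chunk in sorted(fragments):
        data[offset * 8 : offset * 8 + len(chunk)] = chunk
    return bytes(data)

# 4,520-byte packet (20-byte header) on an MTU-2,500 link:
print(fragment_sizes(4520, 2500))  # [(0, 2480), (310, 2020)]
# Each of those fragments, re-fragmented for an MTU of 1,500
# (offsets shown relative to each fragment's own data):
print(fragment_sizes(2500, 1500))  # [(0, 1480), (185, 1000)]
print(fragment_sizes(2040, 1500))  # [(0, 1480), (185, 540)]

# The receiver stitches the four final fragments back together;
# the total is 495 * 8 + 540 = 4,500 bytes, as in the text.
frags = [(185, b"b" * 1000), (0, b"a" * 1480), (495, b"d" * 540), (310, b"c" * 1480)]
print(len(reassemble(frags)))      # 4500
```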
Internet Protocol version 4 (IPv4) is the fourth version of the Internet Protocol (IP). It is one of the core protocols of standards-based internetworking methods in the Internet and other packet-switched networks. IPv4 was the first version deployed for production on SATNET in 1982 and on the ARPANET in January 1983. It is still used to route most Internet traffic today, even with the ongoing deployment of Internet Protocol version 6 (IPv6), its successor. IPv4 uses a 32-bit address space which provides 4,294,967,296 (2^32) unique addresses, but large blocks are reserved for special networking purposes.
2001-12-04T16:18:00Z
2023-12-24T07:38:21Z
[ "Template:Notelist", "Template:IPstack", "Template:Ref RFC", "Template:Gaps", "Template:See also", "Template:Sister project links", "Template:Authority control", "Template:Short description", "Template:Broader", "Template:Main", "Template:Cite IETF", "Template:Webarchive", "Template:Anchor", "Template:Vanchor", "Template:Efn", "Template:Cite journal", "Template:Academic peer reviewed", "Template:Reflist", "Template:Cite web", "Template:Cite book", "Template:Infobox networking protocol", "Template:IPaddr", "Template:IETF RFC", "Template:Val" ]
https://en.wikipedia.org/wiki/Internet_Protocol_version_4
15,318
IPv6
Internet Protocol version 6 (IPv6) is the most recent version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion, and was intended to replace IPv4. In December 1998, IPv6 became a Draft Standard for the IETF, which subsequently ratified it as an Internet Standard on 14 July 2017. Devices on the Internet are assigned a unique IP address for identification and location definition. With the rapid growth of the Internet after commercialization in the 1990s, it became evident that far more addresses would be needed to connect devices than the IPv4 address space had available. By 1998, the IETF had formalized the successor protocol. IPv6 uses 128-bit addresses, theoretically allowing 2^128, or approximately 3.4×10^38, total addresses. The actual number is slightly smaller, as multiple ranges are reserved for special usage or completely excluded from general use. The two protocols are not designed to be interoperable, and thus direct communication between them is impossible, complicating the move to IPv6. However, several transition mechanisms have been devised to rectify this. IPv6 provides other technical benefits in addition to a larger addressing space. In particular, it permits hierarchical address allocation methods that facilitate route aggregation across the Internet, and thus limit the expansion of routing tables. The use of multicast addressing is expanded and simplified, and provides additional optimization for the delivery of services. Device mobility, security, and configuration aspects have been considered in the design of the protocol. IPv6 addresses are represented as eight groups of four hexadecimal digits each, separated by colons. 
The full representation may be shortened; for example, 2001:0db8:0000:0000:0000:8a2e:0370:7334 becomes 2001:db8::8a2e:370:7334. IPv6 is an Internet Layer protocol for packet-switched internetworking and provides end-to-end datagram transmission across multiple IP networks, closely adhering to the design principles developed in the previous version of the protocol, Internet Protocol Version 4 (IPv4). In addition to offering more addresses, IPv6 also implements features not present in IPv4. It simplifies aspects of address configuration, network renumbering, and router announcements when changing network connectivity providers. It simplifies processing of packets in routers by placing the responsibility for packet fragmentation into the end points. The IPv6 subnet size is standardized by fixing the size of the host identifier portion of an address to 64 bits. The addressing architecture of IPv6 is defined in RFC 4291 and allows three different types of transmission: unicast, anycast and multicast. Internet Protocol Version 4 (IPv4) was the first publicly used version of the Internet Protocol. IPv4 was developed as a research project by the Defense Advanced Research Projects Agency (DARPA), a United States Department of Defense agency, before becoming the foundation for the Internet and the World Wide Web. IPv4 includes an addressing system that uses numerical identifiers consisting of 32 bits. These addresses are typically displayed in dot-decimal notation as decimal values of four octets, each in the range 0 to 255, or 8 bits per number. Thus, IPv4 provides an addressing capability of 2^32 or approximately 4.3 billion addresses. Address exhaustion was not initially a concern in IPv4 as this version was originally presumed to be a test of DARPA's networking concepts. During the first decade of operation of the Internet, it became apparent that methods had to be developed to conserve address space. 
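The shortening described above (dropping leading zeros in each group and collapsing the longest run of zero groups into "::") is exactly what Python's standard `ipaddress` module implements; a quick sketch:

```python
import ipaddress

# The module applies the standard compression rules automatically.
full = "2001:0db8:0000:0000:0000:8a2e:0370:7334"
addr = ipaddress.IPv6Address(full)
print(addr.compressed)  # 2001:db8::8a2e:370:7334
print(addr.exploded)    # back to the full eight-group form
```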
In the early 1990s, even after the redesign of the addressing system using a classless network model, it became clear that this would not suffice to prevent IPv4 address exhaustion, and that further changes to the Internet infrastructure were needed. The last unassigned top-level address blocks of 16 million IPv4 addresses were allocated in February 2011 by the Internet Assigned Numbers Authority (IANA) to the five regional Internet registries (RIRs). However, each RIR still has available address pools and is expected to continue with standard address allocation policies until one /8 Classless Inter-Domain Routing (CIDR) block remains. After that, only blocks of 1,024 addresses (/22) will be provided from the RIRs to a local Internet registry (LIR). As of September 2015, all of Asia-Pacific Network Information Centre (APNIC), the Réseaux IP Européens Network Coordination Centre (RIPE NCC), Latin America and Caribbean Network Information Centre (LACNIC), and American Registry for Internet Numbers (ARIN) have reached this stage. This leaves African Network Information Center (AFRINIC) as the sole regional internet registry that is still using the normal protocol for distributing IPv4 addresses. As of November 2018, AFRINIC's minimum allocation is /22 or 1024 IPv4 addresses. A LIR may receive additional allocation when about 80% of all the address space has been utilized. RIPE NCC announced that it had fully run out of IPv4 addresses on 25 November 2019, and called for greater progress on the adoption of IPv6. It is widely expected that the Internet will use IPv4 alongside IPv6 for the foreseeable future. On the Internet, data is transmitted in the form of network packets. IPv6 specifies a new packet format, designed to minimize packet header processing by routers. Because the headers of IPv4 packets and IPv6 packets are significantly different, the two protocols are not interoperable. 
However, most transport and application-layer protocols need little or no change to operate over IPv6; exceptions are application protocols that embed Internet-layer addresses, such as File Transfer Protocol (FTP) and Network Time Protocol (NTP), where the new address format may cause conflicts with existing protocol syntax. The main advantage of IPv6 over IPv4 is its larger address space. The size of an IPv6 address is 128 bits, compared to 32 bits in IPv4. The address space therefore has 2^128 = 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses (340 undecillion, approximately 3.4×10^38). Some blocks of this space and some specific addresses are reserved for special uses. While this address space is very large, it was not the intent of the designers of IPv6 to assure geographical saturation with usable addresses. Rather, the longer addresses simplify allocation of addresses, enable efficient route aggregation, and allow implementation of special addressing features. In IPv4, complex Classless Inter-Domain Routing (CIDR) methods were developed to make the best use of the small address space. The standard size of a subnet in IPv6 is 2^64 addresses, about four billion times the size of the entire IPv4 address space. Thus, actual address space utilization will be small in IPv6, but network management and routing efficiency are improved by the large subnet space and hierarchical route aggregation. Multicasting, the transmission of a packet to multiple destinations in a single send operation, is part of the base specification in IPv6. In IPv4 this is an optional (although commonly implemented) feature. IPv6 multicast addressing has features and protocols in common with IPv4 multicast, but also provides changes and improvements by eliminating the need for certain protocols. IPv6 does not implement traditional IP broadcast, i.e. 
the transmission of a packet to all hosts on the attached link using a special broadcast address, and therefore does not define broadcast addresses. In IPv6, the same result is achieved by sending a packet to the link-local all nodes multicast group at address ff02::1, which is analogous to IPv4 multicasting to address 224.0.0.1. IPv6 also provides for new multicast implementations, including embedding rendezvous point addresses in an IPv6 multicast group address, which simplifies the deployment of inter-domain solutions. In IPv4 it is very difficult for an organization to get even one globally routable multicast group assignment, and the implementation of inter-domain solutions is arcane. Unicast address assignments by a local Internet registry for IPv6 have at least a 64-bit routing prefix, yielding the smallest subnet size available in IPv6 (also 64 bits). With such an assignment it is possible to embed the unicast address prefix into the IPv6 multicast address format, while still providing a 32-bit block, the least significant bits of the address, or approximately 4.2 billion multicast group identifiers. Thus each user of an IPv6 subnet automatically has available a set of globally routable source-specific multicast groups for multicast applications. IPv6 hosts configure themselves automatically. Every interface has a self-generated link-local address and, when connected to a network, conflict resolution is performed and routers provide network prefixes via router advertisements. Stateless configuration of routers can be achieved with a special router renumbering protocol. When necessary, hosts may configure additional stateful addresses via Dynamic Host Configuration Protocol version 6 (DHCPv6) or static addresses manually. Like IPv4, IPv6 supports globally unique IP addresses. 
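The all-nodes group mentioned above, and its IPv4 analogue, can be checked programmatically; a small sketch using the standard `ipaddress` module:

```python
import ipaddress

# ff02::1 is the link-local all-nodes multicast group; its IPv4 analogue
# 224.0.0.1 is likewise a multicast address.  IPv6 defines no broadcast
# addresses at all.
print(ipaddress.IPv6Address("ff02::1").is_multicast)    # True
print(ipaddress.IPv4Address("224.0.0.1").is_multicast)  # True
```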
The design of IPv6 intended to re-emphasize the end-to-end principle of network design that was originally conceived during the establishment of the early Internet by rendering network address translation obsolete. Therefore, every device on the network is globally addressable directly from any other device. A stable, unique, globally addressable IP address would facilitate tracking a device across networks. Therefore, such addresses are a particular privacy concern for mobile devices, such as laptops and cell phones. To address these privacy concerns, the SLAAC protocol includes what are typically called "privacy addresses" or, more correctly, "temporary addresses", codified in RFC 4941, "Privacy Extensions for Stateless Address Autoconfiguration in IPv6". Temporary addresses are random and unstable. A typical consumer device generates a new temporary address daily and will ignore traffic addressed to an old address after one week. Temporary addresses are used by default by Windows since XP SP1, macOS since Mac OS X 10.7, Android since 4.0, and iOS since version 4.3. Use of temporary addresses by Linux distributions varies. Renumbering an existing network for a new connectivity provider with different routing prefixes is a major effort with IPv4. With IPv6, however, changing the prefix announced by a few routers can in principle renumber an entire network, since the host identifiers (the least-significant 64 bits of an address) can be independently self-configured by a host. The SLAAC address generation method is implementation-dependent. IETF recommends that addresses be deterministic but semantically opaque. Internet Protocol Security (IPsec) was originally developed for IPv6, but found widespread deployment first in IPv4, for which it was re-engineered. 
IPsec was a mandatory part of all IPv6 protocol implementations, and Internet Key Exchange (IKE) was recommended, but with RFC 6434 the inclusion of IPsec in IPv6 implementations was downgraded to a recommendation because it was considered impractical to require full IPsec implementation for all types of devices that may use IPv6. However, as of RFC 4301 IPv6 protocol implementations that do implement IPsec need to implement IKEv2 and need to support a minimum set of cryptographic algorithms. This requirement will help to make IPsec implementations more interoperable between devices from different vendors. The IPsec Authentication Header (AH) and the Encapsulating Security Payload header (ESP) are implemented as IPv6 extension headers. The packet header in IPv6 is simpler than the IPv4 header. Many rarely used fields have been moved to optional header extensions. The IPv6 packet header has simplified the process of packet forwarding by routers. Although IPv6 packet headers are at least twice the size of IPv4 packet headers, processing of packets that only contain the base IPv6 header by routers may, in some cases, be more efficient, because less processing is required in routers due to the headers being aligned to match common word sizes. However, many devices implement IPv6 support in software (as opposed to hardware), resulting in poor packet-processing performance. Additionally, for many implementations, the use of Extension Headers causes packets to be processed by a router's CPU, leading to poor performance or even security issues. Moreover, an IPv6 header does not include a checksum. The IPv4 header checksum is calculated for the IPv4 header, and has to be recalculated by routers every time the time to live (called hop limit in the IPv6 protocol) is reduced by one. The absence of a checksum in the IPv6 header furthers the end-to-end principle of Internet design, which envisioned that most processing in the network occurs in the leaf nodes. 
Integrity protection for the data that is encapsulated in the IPv6 packet is assumed to be assured by the link layer or by error detection in higher-layer protocols, namely the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) on the transport layer. Thus, while IPv4 allowed UDP datagram headers to have no checksum (indicated by 0 in the header field), IPv6 requires a checksum in UDP headers. IPv6 routers do not perform IP fragmentation. IPv6 hosts are required either to perform path MTU discovery, perform end-to-end fragmentation, or send packets no larger than the default maximum transmission unit (MTU), which is 1280 octets. Unlike mobile IPv4, mobile IPv6 avoids triangular routing and is therefore as efficient as native IPv6. IPv6 routers may also allow entire subnets to move to a new router connection point without renumbering. The IPv6 packet header has a minimum size of 40 octets (320 bits). Options are implemented as extensions. This provides the opportunity to extend the protocol in the future without affecting the core packet structure. However, RFC 7872 notes that some network operators drop IPv6 packets with extension headers when they traverse transit autonomous systems. IPv4 limits packets to 65,535 (2^16 − 1) octets of payload. An IPv6 node can optionally handle packets over this limit, referred to as jumbograms, which can be as large as 4,294,967,295 (2^32 − 1) octets. The use of jumbograms may improve performance over high-MTU links. The use of jumbograms is indicated by the Jumbo Payload Option extension header. An IPv6 packet has two parts: a header and payload. The header consists of a fixed portion with minimal functionality required for all packets and may be followed by optional extensions to implement special features. The fixed header occupies the first 40 octets (320 bits) of the IPv6 packet. 
It contains the source and destination addresses, traffic class, hop count, and the type of the optional extension or payload which follows the header. This Next Header field tells the receiver how to interpret the data which follows the header. If the packet contains options, this field contains the option type of the next option. The "Next Header" field of the last option points to the upper-layer protocol that is carried in the packet's payload. The current use of the IPv6 Traffic Class field divides this between a 6-bit Differentiated Services Code Point and a 2-bit Explicit Congestion Notification field. Extension headers carry options that are used for special treatment of a packet in the network, e.g., for routing, fragmentation, and for security using the IPsec framework. Without special options, a payload must be less than 64 KB. With a Jumbo Payload option (in a Hop-By-Hop Options extension header), the payload must be less than 4 GB. Unlike with IPv4, routers never fragment a packet. Hosts are expected to use Path MTU Discovery to make their packets small enough to reach the destination without needing to be fragmented. See IPv6 packet fragmentation. IPv6 addresses have 128 bits. The design of the IPv6 address space implements a different design philosophy than in IPv4, in which subnetting was used to improve the efficiency of utilization of the small address space. In IPv6, the address space is deemed large enough for the foreseeable future, and a local area subnet always uses 64 bits for the host portion of the address, designated as the interface identifier, while the most-significant 64 bits are used as the routing prefix. While the myth has existed regarding IPv6 subnets being impossible to scan, RFC 7707 notes that patterns resulting from some IPv6 address configuration techniques and algorithms allow address scanning in many real-world scenarios. The 128 bits of an IPv6 address are represented in 8 groups of 16 bits each. 
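The 40-octet fixed header described above can be unpacked directly; the field values below are invented for illustration (Next Header 17 = UDP, loopback source and destination):

```python
import struct

# Build a sample fixed IPv6 header: version 6, traffic class 0, flow label 0,
# payload length 8, next header 17 (UDP), hop limit 64, ::1 -> ::1.
loopback = b"\x00" * 15 + b"\x01"
header = struct.pack("!IHBB16s16s", 6 << 28, 8, 17, 64, loopback, loopback)
assert len(header) == 40  # the fixed header is always 40 octets

first32, payload_len, next_header, hop_limit, src, dst = \
    struct.unpack("!IHBB16s16s", header)
version = first32 >> 28                   # top 4 bits
traffic_class = (first32 >> 20) & 0xFF    # next 8 bits
dscp, ecn = traffic_class >> 2, traffic_class & 0x3  # 6-bit DSCP, 2-bit ECN
flow_label = first32 & 0xFFFFF            # low 20 bits
print(version, payload_len, next_header, hop_limit)  # 6 8 17 64
```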
Each group is written as four hexadecimal digits (sometimes called hextets or more formally hexadectets and informally a quibble or quad-nibble) and the groups are separated by colons (:). An example of this representation is 2001:0db8:0000:0000:0000:ff00:0042:8329. For convenience and clarity, the representation of an IPv6 address may be shortened with the following rules: An example of application of these rules: The loopback address 0000:0000:0000:0000:0000:0000:0000:0001 is defined in RFC 5156 and is abbreviated to ::1 by using both rules. As an IPv6 address may have more than one representation, the IETF has issued a proposed standard for representing them in text. Because IPv6 addresses contain colons, and URLs use colons to separate the host from the port number, RFC 2732 specifies that an IPv6 address used as the host-part of a URL should be enclosed in square brackets, e.g. http://[2001:db8:4006:812::200e] or http://[2001:db8:4006:812::200e]:8080/path/page.html. All interfaces of IPv6 hosts require a link-local address, which has the prefix fe80::/10. This prefix is followed by 54 bits that can be used for subnetting, although they are typically set to zeros, and a 64-bit interface identifier. The host can compute and assign the Interface identifier by itself without the presence or cooperation of an external network component like a DHCP server, in a process called link-local address autoconfiguration. The lower 64 bits of the link-local address (the suffix) were originally derived from the MAC address of the underlying network interface card. As this method of assigning addresses would cause undesirable address changes when faulty network cards were replaced, and as it also suffered from a number of security and privacy issues, RFC 8064 has replaced the original MAC-based method with the hash-based method specified in RFC 7217. IPv6 uses a new mechanism for mapping IP addresses to link-layer addresses (e.g. 
MAC addresses), because it does not support the broadcast addressing method, on which the functionality of the Address Resolution Protocol (ARP) in IPv4 is based. IPv6 implements the Neighbor Discovery Protocol (NDP, ND) in the link layer, which relies on ICMPv6 and multicast transmission. IPv6 hosts verify the uniqueness of their IPv6 addresses in a local area network (LAN) by sending a neighbor solicitation message asking for the link-layer address of the IP address. If any other host in the LAN is using that address, it responds. A host bringing up a new IPv6 interface first generates a unique link-local address using one of several mechanisms designed to generate a unique address. Should a non-unique address be detected, the host can try again with a newly generated address. Once a unique link-local address is established, the IPv6 host determines whether the LAN is connected on this link to any router interface that supports IPv6. It does so by sending out an ICMPv6 router solicitation message to the all-routers multicast group with its link-local address as source. If there is no answer after a predetermined number of attempts, the host concludes that no routers are connected. If it does get a response, known as a router advertisement, from a router, the response includes the network configuration information to allow establishment of a globally unique address with an appropriate unicast network prefix. There are also two flag bits that tell the host whether it should use DHCP to get further information and addresses: The assignment procedure for global addresses is similar to local-address construction. The prefix is supplied from router advertisements on the network. Multiple prefix announcements cause multiple addresses to be configured. Stateless address autoconfiguration (SLAAC) requires a /64 address block, as defined in RFC 4291. Local Internet registries are assigned at least /32 blocks, which they divide among subordinate networks. 
The initial recommendation stated assignment of a /48 subnet to end-consumer sites (RFC 3177). This was replaced by RFC 6177, which "recommends giving home sites significantly more than a single /64, but does not recommend that every home site be given a /48 either". /56s are specifically considered. It remains to be seen whether ISPs will honor this recommendation. For example, during initial trials, Comcast customers were given a single /64 network. In the Domain Name System (DNS), hostnames are mapped to IPv6 addresses by AAAA ("quad-A") resource records. For reverse resolution, the IETF reserved the domain ip6.arpa, where the name space is hierarchically divided by the 1-digit hexadecimal representation of nibble units (4 bits) of the IPv6 address. This scheme is defined in RFC 3596. When a dual-stack host queries a DNS server to resolve a fully qualified domain name (FQDN), the DNS client of the host sends two DNS requests, one querying A records and the other querying AAAA records. The host operating system may be configured with a preference for address selection rules RFC 6724. An alternate record type was used in early DNS implementations for IPv6, designed to facilitate network renumbering, the A6 records for the forward lookup and a number of other innovations such as bit-string labels and DNAME records. It is defined in RFC 2874 and its references (with further discussion of the pros and cons of both schemes in RFC 3364), but has been deprecated to experimental status (RFC 3363). IPv6 is not foreseen to supplant IPv4 instantaneously. Both protocols will continue to operate simultaneously for some time. Therefore, IPv6 transition mechanisms are needed to enable IPv6 hosts to reach IPv4 services and to allow isolated IPv6 hosts and networks to reach each other over IPv4 infrastructure. According to Silvia Hagen, a dual-stack implementation of the IPv4 and IPv6 on devices is the easiest way to migrate to IPv6. 
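The nibble-wise ip6.arpa scheme described above is what Python's `ipaddress` module computes as `reverse_pointer`; the example address is ours:

```python
import ipaddress

# Reverse DNS name: the address is expanded to 32 hexadecimal nibbles,
# reversed, dot-separated, and suffixed with ip6.arpa.
addr = ipaddress.IPv6Address("2001:db8::567:89ab")
print(addr.reverse_pointer)  # 32 reversed nibbles + ".ip6.arpa"
```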
Many other transition mechanisms use tunneling to encapsulate IPv6 traffic within IPv4 networks and vice versa. This is an imperfect solution, which reduces the maximum transmission unit (MTU) of a link and therefore complicates Path MTU Discovery, and may increase latency.

Dual-stack IP implementations provide complete IPv4 and IPv6 protocol stacks in the operating system of a computer or network device on top of the common physical layer implementation, such as Ethernet. This permits dual-stack hosts to participate in IPv6 and IPv4 networks simultaneously. The method is defined in RFC 4213. A device with a dual-stack implementation in the operating system has both an IPv4 and an IPv6 address, and can communicate with other nodes in the LAN or the Internet using either IPv4 or IPv6.

The DNS protocol is used by both IP protocols to resolve fully qualified domain names and IP addresses, but dual stack requires that the resolving DNS server can resolve both types of addresses. Such a dual-stack DNS server holds IPv4 addresses in the A records and IPv6 addresses in the AAAA records. Depending on the destination that is to be resolved, a DNS name server may return an IPv4 or IPv6 address, or both. A default address selection mechanism, or preferred protocol, needs to be configured either on hosts or on the DNS server. The IETF has published Happy Eyeballs to assist dual-stack applications, so that they can connect using both IPv4 and IPv6, but prefer an IPv6 connection if it is available.

However, dual-stack also needs to be implemented on all routers between the host and the service for which the DNS server has returned an IPv6 address. Dual-stack clients should be configured to prefer IPv6 only if the network is able to forward IPv6 packets using the IPv6 versions of routing protocols. When dual-stack network protocols are in place the application layer can be migrated to IPv6.
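The IPv6-preferred selection can be sketched as follows. This is a simplified illustration with hypothetical resolver results, not the full Happy Eyeballs algorithm (RFC 8305), which additionally races connection attempts with staggered timers:

```python
import socket

def prefer_ipv6(addrinfos):
    """Order getaddrinfo()-style results so AF_INET6 entries come
    first, mimicking a dual-stack client that prefers IPv6 when a
    AAAA record is available."""
    return sorted(addrinfos, key=lambda ai: ai[0] != socket.AF_INET6)

# Hypothetical resolver results for a dual-stack destination: one A
# record and one AAAA record, as a dual-stack DNS server may return.
results = [
    (socket.AF_INET, ("192.0.2.10", 80)),
    (socket.AF_INET6, ("2001:db8::10", 80, 0, 0)),
]
for family, sockaddr in prefer_ipv6(results):
    print(family.name, sockaddr[0])
```

A real client would then attempt connections in this order, falling back to the IPv4 entry if the IPv6 connection fails or stalls.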
While dual-stack is supported by major operating system and network device vendors, legacy networking hardware and servers do not support IPv6.

Internet service providers (ISPs) are increasingly providing their business and private customers with public-facing IPv6 global unicast addresses. If IPv4 is still used in the local area network (LAN), however, and the ISP can only provide one public-facing IPv6 address, the IPv4 LAN addresses are translated into the public-facing IPv6 address using NAT64, a network address translation (NAT) mechanism. Some ISPs cannot provide their customers with both public-facing IPv4 and IPv6 addresses, and thus cannot support dual-stack networking, because they have exhausted their globally routable IPv4 address pool. Meanwhile, their customers are still trying to reach IPv4 web servers and other destinations.

A significant percentage of ISPs in all regional Internet registry (RIR) zones have obtained IPv6 address space. This includes many of the world's major ISPs and mobile network operators, such as Verizon Wireless, StarHub Cable, Chubu Telecommunications, Kabel Deutschland, Swisscom, T-Mobile, Internode and Telefónica. While some ISPs still allocate their customers only IPv4 addresses, others allocate only IPv6 addresses or dual-stack IPv4 and IPv6 addresses. ISPs report the share of IPv6 traffic from customers over their network to be anything between 20% and 40%, but by mid-2017 IPv6 traffic still only accounted for a fraction of total traffic at several large Internet exchange points (IXPs). AMS-IX reported it to be 2% and SeattleIX reported 7%.

A 2017 survey found that many DSL customers that were served by a dual-stack ISP did not request DNS servers to resolve fully qualified domain names into IPv6 addresses.
The survey also found that the majority of traffic from IPv6-ready web-server resources was still requested and served over IPv4, mostly because of ISP customers that did not use the dual-stack facility provided by their ISP, and to a lesser extent because of customers of IPv4-only ISPs.

The technical basis for tunneling, or encapsulating IPv6 packets in IPv4 packets, is outlined in RFC 4213. When the Internet backbone was IPv4-only, one of the frequently used tunneling protocols was 6to4. Teredo tunneling was also frequently used for integrating IPv6 LANs with the IPv4 Internet backbone. Teredo is outlined in RFC 4380 and allows IPv6 local area networks to tunnel over IPv4 networks, by encapsulating IPv6 packets within UDP. The Teredo relay is an IPv6 router that mediates between a Teredo server and the native IPv6 network. It was expected that 6to4 and Teredo would be widely deployed until ISP networks switched to native IPv6, but by 2014 Google statistics showed that the use of both mechanisms had dropped to almost zero.

Hybrid dual-stack IPv6/IPv4 implementations recognize a special class of addresses, the IPv4-mapped IPv6 addresses. These addresses are typically written with a 96-bit prefix in the standard IPv6 format, and the remaining 32 bits are written in the customary dot-decimal notation of IPv4. Addresses in this group consist of an 80-bit prefix of zeros, the next 16 bits are ones, and the remaining, least-significant 32 bits contain the IPv4 address. For example, ::ffff:192.0.2.128 represents the IPv4 address 192.0.2.128. A previous format, called the "IPv4-compatible IPv6 address", was ::192.0.2.128; however, this method is deprecated.

Because of the significant internal differences between the IPv4 and IPv6 protocol stacks, some of the lower-level functionality available to programmers in the IPv6 stack does not work the same when used with IPv4-mapped addresses.
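Python's standard ipaddress module understands the IPv4-mapped form described above and can recover the embedded address:

```python
import ipaddress

# An IPv4-mapped IPv6 address: 80 zero bits, 16 one bits, then the
# 32-bit IPv4 address in the least-significant position.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.128")

# The ipv4_mapped property extracts the embedded IPv4 address, or
# returns None for addresses outside the ::ffff:0:0/96 range.
print(mapped.ipv4_mapped)
```

A plain address such as ::1 yields None from the same property, which makes it a convenient test for whether a socket peer is really an IPv4 host behind a hybrid stack.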
Some common IPv6 stacks do not implement the IPv4-mapped address feature, either because the IPv6 and IPv4 stacks are separate implementations (e.g., Microsoft Windows 2000, XP, and Server 2003), or because of security concerns (OpenBSD). On these operating systems, a program must open a separate socket for each IP protocol it uses. On some systems, e.g., the Linux kernel, NetBSD, and FreeBSD, this feature is controlled by the socket option IPV6_V6ONLY.

The address prefix 64:ff9b::/96 is a class of IPv4-embedded IPv6 addresses for use in NAT64 transition methods. For example, 64:ff9b::192.0.2.128 represents the IPv4 address 192.0.2.128.

A number of security implications may arise from the use of IPv6. Some of them may be related to the IPv6 protocols themselves, while others may be related to implementation flaws.

The addition of nodes that have IPv6 enabled by default by the software manufacturer may result in the inadvertent creation of shadow networks, causing IPv6 traffic to flow into networks that have only IPv4 security management in place. This may also occur with operating system upgrades, when the newer operating system enables IPv6 by default while the older one did not. Failing to update the security infrastructure to accommodate IPv6 can lead to IPv6 traffic bypassing it. Shadow networks have occurred on business networks in which enterprises replaced Windows XP systems, which do not have an IPv6 stack enabled by default, with Windows 7 systems, which do. Some IPv6 stack implementors have therefore recommended disabling IPv4-mapped addresses and instead using a dual-stack network where supporting both IPv4 and IPv6 is necessary.

Research has shown that the use of fragmentation can be leveraged to evade network security controls, as in IPv4. As a result, RFC 7112 requires that the first fragment of an IPv6 packet contain the entire IPv6 header chain, so that some very pathological fragmentation cases are forbidden.
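The 64:ff9b::/96 embedding mentioned earlier in this section (the NAT64 well-known prefix, defined in RFC 6052) amounts to a bitwise OR of the prefix with the 32-bit IPv4 address. A sketch with an illustrative helper name; note that Python prints the result in hexadecimal groups rather than the dotted-quad suffix notation:

```python
import ipaddress

# RFC 6052 well-known prefix used by NAT64 translators.
NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def embed_ipv4(v4: str) -> ipaddress.IPv6Address:
    """Place the 32-bit IPv4 address in the low 32 bits of the
    /96 well-known prefix, as a NAT64 translator does."""
    return ipaddress.IPv6Address(
        int(NAT64_PREFIX.network_address) | int(ipaddress.IPv4Address(v4))
    )

# 192.0.2.128 is 0xc0000280, so the result appears as c000:280.
print(embed_ipv4("192.0.2.128"))  # 64:ff9b::c000:280
```

The reverse direction, recovering the IPv4 address from the low 32 bits, is the masking step a NAT64 gateway performs before forwarding the translated packet.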
Additionally, as a result of research on the evasion of RA-Guard described in RFC 7113, RFC 6980 has deprecated the use of fragmentation with Neighbor Discovery and discouraged the use of fragmentation with Secure Neighbor Discovery (SEND).

Due to the anticipated global growth of the Internet, the Internet Engineering Task Force (IETF) started an effort in the early 1990s to develop a next-generation IP protocol. By the beginning of 1992, several proposals appeared for an expanded Internet addressing system, and by the end of 1992 the IETF announced a call for white papers. In September 1993, the IETF created a temporary, ad hoc IP Next Generation (IPng) area to deal specifically with such issues. The new area was led by Allison Mankin and Scott Bradner, and had a directorate of 15 engineers from diverse backgrounds for direction-setting and preliminary document review. Its members were J. Allard (Microsoft), Steve Bellovin (AT&T), Jim Bound (Digital Equipment Corporation), Ross Callon (Wellfleet), Brian Carpenter (CERN), Dave Clark (MIT), John Curran (NEARNET), Steve Deering (Xerox), Dino Farinacci (Cisco), Paul Francis (NTT), Eric Fleischmann (Boeing), Mark Knopper (Ameritech), Greg Minshall (Novell), Rob Ullmann (Lotus), and Lixia Zhang (Xerox).

The Internet Engineering Task Force adopted the IPng model on 25 July 1994, with the formation of several IPng working groups. By 1996, a series of RFCs was released defining Internet Protocol version 6 (IPv6), starting with RFC 1883. (Version 5 was used by the experimental Internet Stream Protocol.) The first RFC to standardize IPv6 was RFC 1883 in 1995, which was obsoleted by RFC 2460 in 1998. In July 2017 this RFC was superseded by RFC 8200, which elevated IPv6 to "Internet Standard" (the highest maturity level for IETF protocols).
The 1993 introduction of Classless Inter-Domain Routing (CIDR) in the routing and IP address allocation for the Internet, and the extensive use of network address translation (NAT), delayed IPv4 address exhaustion and so allowed time for IPv6 deployment, which began in the mid-2000s.

Universities were among the early adopters of IPv6. Virginia Tech deployed IPv6 at a trial location in 2004 and later expanded IPv6 deployment across the campus network. By 2016, 82% of the traffic on their network used IPv6. Imperial College London began experimental IPv6 deployment in 2003, and by 2016 the IPv6 traffic on their networks averaged between 20% and 40%. A significant portion of this IPv6 traffic was generated through their high-energy physics collaboration with CERN, which relies entirely on IPv6.

The Domain Name System (DNS) has supported IPv6 since 2008. In the same year, IPv6 was first used in a major world event during the Beijing 2008 Summer Olympics. By 2011, all major operating systems in use on personal computers and server systems had production-quality IPv6 implementations.

Cellular telephone systems presented a large deployment field for Internet Protocol devices as mobile telephone service made the transition from 3G to 4G technologies, in which voice is provisioned as a voice over IP (VoIP) service that would leverage IPv6 enhancements. In 2009, the US cellular operator Verizon released technical specifications for devices to operate on its "next-generation" networks. The specification mandated IPv6 operation according to the 3GPP Release 8 specifications (March 2009) and relegated IPv4 to an optional capability.

The deployment of IPv6 in the Internet backbone continued. In 2018, only 25.3% of about 54,000 autonomous systems advertised both IPv4 and IPv6 prefixes in the global Border Gateway Protocol (BGP) routing database. A further 243 networks advertised only an IPv6 prefix.
Internet backbone transit networks offering IPv6 support existed in every country globally, except in parts of Africa, the Middle East and China. By mid-2018 some major European broadband ISPs had deployed IPv6 for the majority of their customers. Sky UK provided over 86% of its customers with IPv6, Deutsche Telekom had 56% deployment of IPv6, XS4ALL in the Netherlands had 73% deployment and in Belgium the broadband ISPs VOO and Telenet had 73% and 63% IPv6 deployment respectively. In the United States the broadband ISP Xfinity had an IPv6 deployment of about 66%. In 2018 Xfinity reported an estimated 36.1 million IPv6 users, while AT&T reported 22.3 million IPv6 users.
[ { "paragraph_id": 0, "text": "Internet Protocol version 6 (IPv6) is the most recent version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion, and was intended to replace IPv4. In December 1998, IPv6 became a Draft Standard for the IETF, which subsequently ratified it as an Internet Standard on 14 July 2017.", "title": "" }, { "paragraph_id": 1, "text": "Devices on the Internet are assigned a unique IP address for identification and location definition. With the rapid growth of the Internet after commercialization in the 1990s, it became evident that far more addresses would be needed to connect devices than the IPv4 address space had available. By 1998, the IETF had formalized the successor protocol. IPv6 uses 128-bit addresses, theoretically allowing 2, or approximately 3.4×10 total addresses. The actual number is slightly smaller, as multiple ranges are reserved for special usage or completely excluded from general use. The two protocols are not designed to be interoperable, and thus direct communication between them is impossible, complicating the move to IPv6. However, several transition mechanisms have been devised to rectify this.", "title": "" }, { "paragraph_id": 2, "text": "IPv6 provides other technical benefits in addition to a larger addressing space. In particular, it permits hierarchical address allocation methods that facilitate route aggregation across the Internet, and thus limit the expansion of routing tables. The use of multicast addressing is expanded and simplified, and provides additional optimization for the delivery of services. 
Device mobility, security, and configuration aspects have been considered in the design of the protocol.", "title": "" }, { "paragraph_id": 3, "text": "IPv6 addresses are represented as eight groups of four hexadecimal digits each, separated by colons. The full representation may be shortened; for example, 2001:0db8:0000:0000:0000:8a2e:0370:7334 becomes 2001:db8::8a2e:370:7334.", "title": "" }, { "paragraph_id": 4, "text": "IPv6 is an Internet Layer protocol for packet-switched internetworking and provides end-to-end datagram transmission across multiple IP networks, closely adhering to the design principles developed in the previous version of the protocol, Internet Protocol Version 4 (IPv4).", "title": "Main features" }, { "paragraph_id": 5, "text": "In addition to offering more addresses, IPv6 also implements features not present in IPv4. It simplifies aspects of address configuration, network renumbering, and router announcements when changing network connectivity providers. It simplifies processing of packets in routers by placing the responsibility for packet fragmentation into the end points. The IPv6 subnet size is standardized by fixing the size of the host identifier portion of an address to 64 bits.", "title": "Main features" }, { "paragraph_id": 6, "text": "The addressing architecture of IPv6 is defined in RFC 4291 and allows three different types of transmission: unicast, anycast and multicast.", "title": "Main features" }, { "paragraph_id": 7, "text": "Internet Protocol Version 4 (IPv4) was the first publicly used version of the Internet Protocol. IPv4 was developed as a research project by the Defense Advanced Research Projects Agency (DARPA), a United States Department of Defense agency, before becoming the foundation for the Internet and the World Wide Web. IPv4 includes an addressing system that uses numerical identifiers consisting of 32 bits. 
These addresses are typically displayed in dot-decimal notation as decimal values of four octets, each in the range 0 to 255, or 8 bits per number. Thus, IPv4 provides an addressing capability of 2 or approximately 4.3 billion addresses. Address exhaustion was not initially a concern in IPv4 as this version was originally presumed to be a test of DARPA's networking concepts. During the first decade of operation of the Internet, it became apparent that methods had to be developed to conserve address space. In the early 1990s, even after the redesign of the addressing system using a classless network model, it became clear that this would not suffice to prevent IPv4 address exhaustion, and that further changes to the Internet infrastructure were needed.", "title": "Motivation and origin" }, { "paragraph_id": 8, "text": "The last unassigned top-level address blocks of 16 million IPv4 addresses were allocated in February 2011 by the Internet Assigned Numbers Authority (IANA) to the five regional Internet registries (RIRs). However, each RIR still has available address pools and is expected to continue with standard address allocation policies until one /8 Classless Inter-Domain Routing (CIDR) block remains. After that, only blocks of 1,024 addresses (/22) will be provided from the RIRs to a local Internet registry (LIR). As of September 2015, all of Asia-Pacific Network Information Centre (APNIC), the Réseaux IP Européens Network Coordination Centre (RIPE NCC), Latin America and Caribbean Network Information Centre (LACNIC), and American Registry for Internet Numbers (ARIN) have reached this stage. This leaves African Network Information Center (AFRINIC) as the sole regional internet registry that is still using the normal protocol for distributing IPv4 addresses. As of November 2018, AFRINIC's minimum allocation is /22 or 1024 IPv4 addresses. 
A LIR may receive additional allocation when about 80% of all the address space has been utilized.", "title": "Motivation and origin" }, { "paragraph_id": 9, "text": "RIPE NCC announced that it had fully run out of IPv4 addresses on 25 November 2019, and called for greater progress on the adoption of IPv6.", "title": "Motivation and origin" }, { "paragraph_id": 10, "text": "It is widely expected that the Internet will use IPv4 alongside IPv6 for the foreseeable future.", "title": "Motivation and origin" }, { "paragraph_id": 11, "text": "On the Internet, data is transmitted in the form of network packets. IPv6 specifies a new packet format, designed to minimize packet header processing by routers. Because the headers of IPv4 packets and IPv6 packets are significantly different, the two protocols are not interoperable. However, most transport and application-layer protocols need little or no change to operate over IPv6; exceptions are application protocols that embed Internet-layer addresses, such as File Transfer Protocol (FTP) and Network Time Protocol (NTP), where the new address format may cause conflicts with existing protocol syntax.", "title": "Comparison with IPv4" }, { "paragraph_id": 12, "text": "The main advantage of IPv6 over IPv4 is its larger address space. The size of an IPv6 address is 128 bits, compared to 32 bits in IPv4. The address space therefore has 2=340,282,366,920,938,463,463,374,607,431,768,211,456 addresses (340 undecillion, approximately 3.4×10). Some blocks of this space and some specific addresses are reserved for special uses.", "title": "Comparison with IPv4" }, { "paragraph_id": 13, "text": "While this address space is very large, it was not the intent of the designers of IPv6 to assure geographical saturation with usable addresses. Rather, the longer addresses simplify allocation of addresses, enable efficient route aggregation, and allow implementation of special addressing features. 
In IPv4, complex Classless Inter-Domain Routing (CIDR) methods were developed to make the best use of the small address space. The standard size of a subnet in IPv6 is 2 addresses, about four billion times the size of the entire IPv4 address space. Thus, actual address space utilization will be small in IPv6, but network management and routing efficiency are improved by the large subnet space and hierarchical route aggregation.", "title": "Comparison with IPv4" }, { "paragraph_id": 14, "text": "Multicasting, the transmission of a packet to multiple destinations in a single send operation, is part of the base specification in IPv6. In IPv4 this is an optional (although commonly implemented) feature. IPv6 multicast addressing has features and protocols in common with IPv4 multicast, but also provides changes and improvements by eliminating the need for certain protocols. IPv6 does not implement traditional IP broadcast, i.e. the transmission of a packet to all hosts on the attached link using a special broadcast address, and therefore does not define broadcast addresses. In IPv6, the same result is achieved by sending a packet to the link-local all nodes multicast group at address ff02::1, which is analogous to IPv4 multicasting to address 224.0.0.1. IPv6 also provides for new multicast implementations, including embedding rendezvous point addresses in an IPv6 multicast group address, which simplifies the deployment of inter-domain solutions.", "title": "Comparison with IPv4" }, { "paragraph_id": 15, "text": "In IPv4 it is very difficult for an organization to get even one globally routable multicast group assignment, and the implementation of inter-domain solutions is arcane. Unicast address assignments by a local Internet registry for IPv6 have at least a 64-bit routing prefix, yielding the smallest subnet size available in IPv6 (also 64 bits). 
With such an assignment it is possible to embed the unicast address prefix into the IPv6 multicast address format, while still providing a 32-bit block, the least significant bits of the address, or approximately 4.2 billion multicast group identifiers. Thus each user of an IPv6 subnet automatically has available a set of globally routable source-specific multicast groups for multicast applications.", "title": "Comparison with IPv4" }, { "paragraph_id": 16, "text": "IPv6 hosts configure themselves automatically. Every interface has a self-generated link-local address and, when connected to a network, conflict resolution is performed and routers provide network prefixes via router advertisements. Stateless configuration of routers can be achieved with a special router renumbering protocol. When necessary, hosts may configure additional stateful addresses via Dynamic Host Configuration Protocol version 6 (DHCPv6) or static addresses manually.", "title": "Comparison with IPv4" }, { "paragraph_id": 17, "text": "Like IPv4, IPv6 supports globally unique IP addresses. The design of IPv6 intended to re-emphasize the end-to-end principle of network design that was originally conceived during the establishment of the early Internet by rendering network address translation obsolete. Therefore, every device on the network is globally addressable directly from any other device.", "title": "Comparison with IPv4" }, { "paragraph_id": 18, "text": "A stable, unique, globally addressable IP address would facilitate tracking a device across networks. Therefore, such addresses are a particular privacy concern for mobile devices, such as laptops and cell phones. To address these privacy concerns, the SLAAC protocol includes what are typically called \"privacy addresses\" or, more correctly, \"temporary addresses\", codified in RFC 4941, \"Privacy Extensions for Stateless Address Autoconfiguration in IPv6\". Temporary addresses are random and unstable. 
A typical consumer device generates a new temporary address daily and will ignore traffic addressed to an old address after one week. Temporary addresses are used by default by Windows since XP SP1, macOS since (Mac OS X) 10.7, Android since 4.0, and iOS since version 4.3. Use of temporary addresses by Linux distributions varies.", "title": "Comparison with IPv4" }, { "paragraph_id": 19, "text": "Renumbering an existing network for a new connectivity provider with different routing prefixes is a major effort with IPv4. With IPv6, however, changing the prefix announced by a few routers can in principle renumber an entire network, since the host identifiers (the least-significant 64 bits of an address) can be independently self-configured by a host.", "title": "Comparison with IPv4" }, { "paragraph_id": 20, "text": "The SLAAC address generation method is implementation-dependent. IETF recommends that addresses be deterministic but semantically opaque.", "title": "Comparison with IPv4" }, { "paragraph_id": 21, "text": "Internet Protocol Security (IPsec) was originally developed for IPv6, but found widespread deployment first in IPv4, for which it was re-engineered. IPsec was a mandatory part of all IPv6 protocol implementations, and Internet Key Exchange (IKE) was recommended, but with RFC 6434 the inclusion of IPsec in IPv6 implementations was downgraded to a recommendation because it was considered impractical to require full IPsec implementation for all types of devices that may use IPv6. However, as of RFC 4301 IPv6 protocol implementations that do implement IPsec need to implement IKEv2 and need to support a minimum set of cryptographic algorithms. This requirement will help to make IPsec implementations more interoperable between devices from different vendors. 
The IPsec Authentication Header (AH) and the Encapsulating Security Payload header (ESP) are implemented as IPv6 extension headers.", "title": "Comparison with IPv4" }, { "paragraph_id": 22, "text": "The packet header in IPv6 is simpler than the IPv4 header. Many rarely used fields have been moved to optional header extensions.The IPv6 packet header has simplified the process of packet forwarding by routers. Although IPv6 packet headers are at least twice the size of IPv4 packet headers, processing of packets that only contain the base IPv6 header by routers may, in some cases, be more efficient, because less processing is required in routers due to the headers being aligned to match common word sizes. However, many devices implement IPv6 support in software (as opposed to hardware), thus resulting in very bad packet processing performance. Additionally, for many implementations, the use of Extension Headers causes packets to be processed by a router's CPU, leading to poor performance or even security issues.", "title": "Comparison with IPv4" }, { "paragraph_id": 23, "text": "Moreover, an IPv6 header does not include a checksum. The IPv4 header checksum is calculated for the IPv4 header, and has to be recalculated by routers every time the time to live (called hop limit in the IPv6 protocol) is reduced by one. The absence of a checksum in the IPv6 header furthers the end-to-end principle of Internet design, which envisioned that most processing in the network occurs in the leaf nodes. Integrity protection for the data that is encapsulated in the IPv6 packet is assumed to be assured by both the link layer or error detection in higher-layer protocols, namely the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) on the transport layer. 
Thus, while IPv4 allowed UDP datagram headers to have no checksum (indicated by 0 in the header field), IPv6 requires a checksum in UDP headers.", "title": "Comparison with IPv4" }, { "paragraph_id": 24, "text": "IPv6 routers do not perform IP fragmentation. IPv6 hosts are required either to perform path MTU discovery, perform end-to-end fragmentation, or send packets no larger than the default maximum transmission unit (MTU), which is 1280 octets.", "title": "Comparison with IPv4" }, { "paragraph_id": 25, "text": "Unlike mobile IPv4, mobile IPv6 avoids triangular routing and is therefore as efficient as native IPv6. IPv6 routers may also allow entire subnets to move to a new router connection point without renumbering.", "title": "Comparison with IPv4" }, { "paragraph_id": 26, "text": "The IPv6 packet header has a minimum size of 40 octets (320 bits). Options are implemented as extensions. This provides the opportunity to extend the protocol in the future without affecting the core packet structure. However, RFC 7872 notes that some network operators drop IPv6 packets with extension headers when they traverse transit autonomous systems.", "title": "Comparison with IPv4" }, { "paragraph_id": 27, "text": "IPv4 limits packets to 65,535 (2−1) octets of payload. An IPv6 node can optionally handle packets over this limit, referred to as jumbograms, which can be as large as 4,294,967,295 (2−1) octets. The use of jumbograms may improve performance over high-MTU links. 
The use of jumbograms is indicated by the Jumbo Payload Option extension header.", "title": "Comparison with IPv4" }, { "paragraph_id": 28, "text": "An IPv6 packet has two parts: a header and payload.", "title": "IPv6 packets" }, { "paragraph_id": 29, "text": "The header consists of a fixed portion with minimal functionality required for all packets and may be followed by optional extensions to implement special features.", "title": "IPv6 packets" }, { "paragraph_id": 30, "text": "The fixed header occupies the first 40 octets (320 bits) of the IPv6 packet. It contains the source and destination addresses, traffic class, hop count, and the type of the optional extension or payload which follows the header. This Next Header field tells the receiver how to interpret the data which follows the header. If the packet contains options, this field contains the option type of the next option. The \"Next Header\" field of the last option points to the upper-layer protocol that is carried in the packet's payload.", "title": "IPv6 packets" }, { "paragraph_id": 31, "text": "The current use of the IPv6 Traffic Class field divides this between a 6 bit Differentiated Services Code Point and a 2-bit Explicit Congestion Notification field.", "title": "IPv6 packets" }, { "paragraph_id": 32, "text": "Extension headers carry options that are used for special treatment of a packet in the network, e.g., for routing, fragmentation, and for security using the IPsec framework.", "title": "IPv6 packets" }, { "paragraph_id": 33, "text": "Without special options, a payload must be less than 64kB. With a Jumbo Payload option (in a Hop-By-Hop Options extension header), the payload must be less than 4 GB.", "title": "IPv6 packets" }, { "paragraph_id": 34, "text": "Unlike with IPv4, routers never fragment a packet. Hosts are expected to use Path MTU Discovery to make their packets small enough to reach the destination without needing to be fragmented. 
See IPv6 packet fragmentation.", "title": "IPv6 packets" }, { "paragraph_id": 35, "text": "IPv6 addresses have 128 bits. The design of the IPv6 address space implements a different design philosophy than in IPv4, in which subnetting was used to improve the efficiency of utilization of the small address space. In IPv6, the address space is deemed large enough for the foreseeable future, and a local area subnet always uses 64 bits for the host portion of the address, designated as the interface identifier, while the most-significant 64 bits are used as the routing prefix. While the myth has existed regarding IPv6 subnets being impossible to scan, RFC 7707 notes that patterns resulting from some IPv6 address configuration techniques and algorithms allow address scanning in many real-world scenarios.", "title": "Addressing" }, { "paragraph_id": 36, "text": "The 128 bits of an IPv6 address are represented in 8 groups of 16 bits each. Each group is written as four hexadecimal digits (sometimes called hextets or more formally hexadectets and informally a quibble or quad-nibble) and the groups are separated by colons (:). 
An example of this representation is 2001:0db8:0000:0000:0000:ff00:0042:8329.", "title": "Addressing" }, { "paragraph_id": 37, "text": "For convenience and clarity, the representation of an IPv6 address may be shortened with the following rules:", "title": "Addressing" }, { "paragraph_id": 38, "text": "An example of application of these rules:", "title": "Addressing" }, { "paragraph_id": 39, "text": "The loopback address 0000:0000:0000:0000:0000:0000:0000:0001 is defined in RFC 5156 and is abbreviated to ::1 by using both rules.", "title": "Addressing" }, { "paragraph_id": 40, "text": "As an IPv6 address may have more than one representation, the IETF has issued a proposed standard for representing them in text.", "title": "Addressing" }, { "paragraph_id": 41, "text": "Because IPv6 addresses contain colons, and URLs use colons to separate the host from the port number, RFC2732 specifies that an IPv6 address used as the host-part of a URL should be enclosed in square brackets, e.g. http://[2001:db8:4006:812::200e] or http://[2001:db8:4006:812::200e]:8080/path/page.html.", "title": "Addressing" }, { "paragraph_id": 42, "text": "All interfaces of IPv6 hosts require a link-local address, which have the prefix fe80::/10. This prefix is followed by 54 bits that can be used for subnetting, although they are typically set to zeros, and a 64-bit interface identifier. The host can compute and assign the Interface identifier by itself without the presence or cooperation of an external network component like a DHCP server, in a process called link-local address autoconfiguration.", "title": "Addressing" }, { "paragraph_id": 43, "text": "The lower 64 bits of the link-local address (the suffix) were originally derived from the MAC address of the underlying network interface card. 
As this method of assigning addresses would cause undesirable address changes when faulty network cards were replaced, and as it also suffered from a number of security and privacy issues, RFC 8064 has replaced the original MAC-based method with the hash-based method specified in RFC 7217.", "title": "Addressing" }, { "paragraph_id": 44, "text": "IPv6 uses a new mechanism for mapping IP addresses to link-layer addresses (e.g. MAC addresses), because it does not support the broadcast addressing method, on which the functionality of the Address Resolution Protocol (ARP) in IPv4 is based. IPv6 implements the Neighbor Discovery Protocol (NDP, ND) in the link layer, which relies on ICMPv6 and multicast transmission. IPv6 hosts verify the uniqueness of their IPv6 addresses in a local area network (LAN) by sending a neighbor solicitation message asking for the link-layer address of the IP address. If any other host in the LAN is using that address, it responds.", "title": "Addressing" }, { "paragraph_id": 45, "text": "A host bringing up a new IPv6 interface first generates a unique link-local address using one of several mechanisms designed to generate a unique address. Should a non-unique address be detected, the host can try again with a newly generated address. Once a unique link-local address is established, the IPv6 host determines whether the LAN is connected on this link to any router interface that supports IPv6. It does so by sending out an ICMPv6 router solicitation message to the all-routers multicast group with its link-local address as source. If there is no answer after a predetermined number of attempts, the host concludes that no routers are connected. If it does get a response, known as a router advertisement, from a router, the response includes the network configuration information to allow establishment of a globally unique address with an appropriate unicast network prefix. 
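The legacy MAC-derived interface identifier described above can be sketched as follows. This is a minimal illustration of the deprecated modified-EUI-64 method (not the RFC 7217 hash-based replacement), and the MAC address used is an arbitrary example value:

```python
import ipaddress

def eui64_link_local(mac: str) -> ipaddress.IPv6Address:
    """Legacy modified EUI-64 derivation (the method RFC 8064/RFC 7217
    later replaced): flip the universal/local bit of the first octet
    and insert 0xFF, 0xFE between the two halves of the MAC address."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02  # invert the universal/local bit
    iid = bytes(octets[:3] + [0xFF, 0xFE] + octets[3:])
    # Prepend the fe80::/64 link-local prefix to the 64-bit identifier.
    return ipaddress.IPv6Address((0xFE80 << 112) | int.from_bytes(iid, "big"))

# "00:1a:2b:3c:4d:5e" is an arbitrary example MAC address.
print(eui64_link_local("00:1a:2b:3c:4d:5e"))  # fe80::21a:2bff:fe3c:4d5e
```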
There are also two flag bits that tell the host whether it should use DHCP to get further information and addresses:", "title": "Addressing" }, { "paragraph_id": 46, "text": "The assignment procedure for global addresses is similar to local-address construction. The prefix is supplied from router advertisements on the network. Multiple prefix announcements cause multiple addresses to be configured.", "title": "Addressing" }, { "paragraph_id": 47, "text": "Stateless address autoconfiguration (SLAAC) requires a /64 address block, as defined in RFC 4291. Local Internet registries are assigned at least /32 blocks, which they divide among subordinate networks. The initial recommendation stated assignment of a /48 subnet to end-consumer sites (RFC 3177). This was replaced by RFC 6177, which \"recommends giving home sites significantly more than a single /64, but does not recommend that every home site be given a /48 either\". /56s are specifically considered. It remains to be seen whether ISPs will honor this recommendation. For example, during initial trials, Comcast customers were given a single /64 network.", "title": "Addressing" }, { "paragraph_id": 48, "text": "In the Domain Name System (DNS), hostnames are mapped to IPv6 addresses by AAAA (\"quad-A\") resource records. For reverse resolution, the IETF reserved the domain ip6.arpa, where the name space is hierarchically divided by the 1-digit hexadecimal representation of nibble units (4 bits) of the IPv6 address. This scheme is defined in RFC 3596.", "title": "IPv6 in the Domain Name System" }, { "paragraph_id": 49, "text": "When a dual-stack host queries a DNS server to resolve a fully qualified domain name (FQDN), the DNS client of the host sends two DNS requests, one querying A records and the other querying AAAA records. 
The host operating system may be configured with a preference among the returned addresses according to the address selection rules of RFC 6724.", "title": "IPv6 in the Domain Name System" }, { "paragraph_id": 50, "text": "An alternate record type, the A6 record, was used in early DNS implementations for IPv6; it was designed to facilitate network renumbering for the forward lookup, alongside a number of other innovations such as bit-string labels and DNAME records. It is defined in RFC 2874 and its references (with further discussion of the pros and cons of both schemes in RFC 3364), but has been relegated to experimental status (RFC 3363).", "title": "IPv6 in the Domain Name System" }, { "paragraph_id": 51, "text": "IPv6 is not foreseen to supplant IPv4 instantaneously. Both protocols will continue to operate simultaneously for some time. Therefore, IPv6 transition mechanisms are needed to enable IPv6 hosts to reach IPv4 services and to allow isolated IPv6 hosts and networks to reach each other over IPv4 infrastructure.", "title": "Transition mechanisms" }, { "paragraph_id": 52, "text": "According to Silvia Hagen, a dual-stack implementation of IPv4 and IPv6 on devices is the easiest way to migrate to IPv6. Many other transition mechanisms use tunneling to encapsulate IPv6 traffic within IPv4 networks and vice versa. This is an imperfect solution, which reduces the maximum transmission unit (MTU) of a link and therefore complicates Path MTU Discovery, and may increase latency.", "title": "Transition mechanisms" }, { "paragraph_id": 53, "text": "Dual-stack IP implementations provide complete IPv4 and IPv6 protocol stacks in the operating system of a computer or network device on top of the common physical layer implementation, such as Ethernet. This permits dual-stack hosts to participate in IPv6 and IPv4 networks simultaneously. 
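The nibble-based ip6.arpa naming scheme of RFC 3596 described above can be reproduced with Python's standard ipaddress module, a minimal sketch using a documentation-prefix address:

```python
import ipaddress

# RFC 3596 reverse resolution: each 4-bit nibble of the address becomes
# one hexadecimal label, in reverse order, under the ip6.arpa domain.
addr = ipaddress.IPv6Address("2001:db8::1")
name = addr.reverse_pointer
print(name)  # 32 one-nibble labels, ending in ...8.b.d.0.1.0.0.2.ip6.arpa
```

A PTR record at this name would hold the hostname for the address, mirroring in-addr.arpa for IPv4.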
The method is defined in RFC 4213.", "title": "Transition mechanisms" }, { "paragraph_id": 54, "text": "A device with dual-stack implementation in the operating system has an IPv4 and IPv6 address, and can communicate with other nodes in the LAN or the Internet using either IPv4 or IPv6. The DNS protocol is used by both IP protocols to resolve fully qualified domain names and IP addresses, but dual stack requires that the resolving DNS server can resolve both types of addresses. Such a dual-stack DNS server holds IPv4 addresses in the A records and IPv6 addresses in the AAAA records. Depending on the destination that is to be resolved, a DNS name server may return an IPv4 or IPv6 IP address, or both. A default address selection mechanism, or preferred protocol, needs to be configured either on hosts or the DNS server. The IETF has published Happy Eyeballs to assist dual-stack applications, so that they can connect using both IPv4 and IPv6, but prefer an IPv6 connection if it is available. However, dual-stack also needs to be implemented on all routers between the host and the service for which the DNS server has returned an IPv6 address. Dual-stack clients should be configured to prefer IPv6 only if the network is able to forward IPv6 packets using the IPv6 versions of routing protocols. When dual-stack network protocols are in place the application layer can be migrated to IPv6.", "title": "Transition mechanisms" }, { "paragraph_id": 55, "text": "While dual-stack is supported by major operating system and network device vendors, legacy networking hardware and servers do not support IPv6.", "title": "Transition mechanisms" }, { "paragraph_id": 56, "text": "Internet service providers (ISPs) are increasingly providing their business and private customers with public-facing IPv6 global unicast addresses. 
If IPv4 is still used in the local area network (LAN), however, and the ISP can only provide one public-facing IPv6 address, the IPv4 LAN addresses are translated into the public-facing IPv6 address using NAT64, a network address translation (NAT) mechanism. Some ISPs cannot provide their customers with both public-facing IPv4 and IPv6 addresses, and thus cannot support dual-stack networking, because they have exhausted their globally routable IPv4 address pool. Meanwhile, ISP customers are still trying to reach IPv4 web servers and other destinations.", "title": "Transition mechanisms" }, { "paragraph_id": 57, "text": "A significant percentage of ISPs in all regional Internet registry (RIR) zones have obtained IPv6 address space. This includes many of the world's major ISPs and mobile network operators, such as Verizon Wireless, StarHub Cable, Chubu Telecommunications, Kabel Deutschland, Swisscom, T-Mobile, Internode and Telefónica.", "title": "Transition mechanisms" }, { "paragraph_id": 58, "text": "While some ISPs still allocate customers only IPv4 addresses, many ISPs allocate their customers only an IPv6 address or dual-stack IPv4 and IPv6 addresses. ISPs report the share of IPv6 traffic from customers over their network to be anything between 20% and 40%, but by mid-2017 IPv6 traffic still only accounted for a fraction of total traffic at several large Internet exchange points (IXPs). AMS-IX reported it to be 2% and SeattleIX reported 7%. A 2017 survey found that many DSL customers that were served by a dual-stack ISP did not request DNS servers to resolve fully qualified domain names into IPv6 addresses. 
The survey also found that the majority of traffic from IPv6-ready web-server resources was still requested and served over IPv4, mostly due to ISP customers that did not use the dual-stack facility provided by their ISP and to a lesser extent due to customers of IPv4-only ISPs.", "title": "Transition mechanisms" }, { "paragraph_id": 59, "text": "The technical basis for tunneling, or encapsulating IPv6 packets in IPv4 packets, is outlined in RFC 4213. When the Internet backbone was IPv4-only, one of the frequently used tunneling protocols was 6to4. Teredo tunneling was also frequently used for integrating IPv6 LANs with the IPv4 Internet backbone. Teredo is outlined in RFC 4380 and allows IPv6 local area networks to tunnel over IPv4 networks, by encapsulating IPv6 packets within UDP. The Teredo relay is an IPv6 router that mediates between a Teredo server and the native IPv6 network. It was expected that 6to4 and Teredo would be widely deployed until ISP networks switched to native IPv6, but by 2014 Google Statistics showed that the use of both mechanisms had dropped to almost zero.", "title": "Transition mechanisms" }, { "paragraph_id": 60, "text": "Hybrid dual-stack IPv6/IPv4 implementations recognize a special class of addresses, the IPv4-mapped IPv6 addresses. These addresses are typically written with a 96-bit prefix in the standard IPv6 format, and the remaining 32 bits are written in the customary dot-decimal notation of IPv4.", "title": "Transition mechanisms" }, { "paragraph_id": 61, "text": "Addresses in this group consist of an 80-bit prefix of zeros, the next 16 bits are ones, and the remaining, least-significant 32 bits contain the IPv4 address. For example, ::ffff:192.0.2.128 represents the IPv4 address 192.0.2.128. 
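The layout of IPv4-mapped addresses described above can be inspected with Python's standard ipaddress module, a minimal sketch using the example address from the text:

```python
import ipaddress

# An IPv4-mapped IPv6 address (::ffff:0:0/96) carries an IPv4 address
# in its least-significant 32 bits.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.128")
print(mapped.ipv4_mapped)  # 192.0.2.128
print(mapped.exploded)     # 0000:0000:0000:0000:0000:ffff:c000:0280
```

The exploded form makes the structure visible: 80 zero bits, 16 one bits (ffff), then the IPv4 address 192.0.2.128 as the hex groups c000:0280.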
A previous format, called \"IPv4-compatible IPv6 address\", was ::192.0.2.128; however, this method is deprecated.", "title": "Transition mechanisms" }, { "paragraph_id": 62, "text": "Because of the significant internal differences between IPv4 and IPv6 protocol stacks, some of the lower-level functionality available to programmers in the IPv6 stack does not work the same when used with IPv4-mapped addresses. Some common IPv6 stacks do not implement the IPv4-mapped address feature, either because the IPv6 and IPv4 stacks are separate implementations (e.g., Microsoft Windows 2000, XP, and Server 2003), or because of security concerns (OpenBSD). On these operating systems, a program must open a separate socket for each IP protocol it uses. On some systems, e.g., the Linux kernel, NetBSD, and FreeBSD, this feature is controlled by the socket option IPV6_V6ONLY.", "title": "Transition mechanisms" }, { "paragraph_id": 63, "text": "The address prefix 64:ff9b::/96 is a class of IPv4-embedded IPv6 addresses for use in NAT64 transition methods. For example, 64:ff9b::192.0.2.128 represents the IPv4 address 192.0.2.128.", "title": "Transition mechanisms" }, { "paragraph_id": 64, "text": "A number of security implications may arise from the use of IPv6. Some of them may be related to the IPv6 protocols themselves, while others may be related to implementation flaws.", "title": "Security" }, { "paragraph_id": 65, "text": "The addition of nodes having IPv6 enabled by default by the software manufacturer may result in the inadvertent creation of shadow networks, causing IPv6 traffic to flow into networks having only IPv4 security management in place. This may also occur with operating system upgrades, when the newer operating system enables IPv6 by default, while the older one did not. Failing to update the security infrastructure to accommodate IPv6 can lead to IPv6 traffic bypassing it. 
Shadow networks have occurred on business networks in which enterprises are replacing Windows XP systems, which do not have an IPv6 stack enabled by default, with Windows 7 systems, which do. Some IPv6 stack implementors have therefore recommended disabling IPv4-mapped addresses and instead using a dual-stack network where supporting both IPv4 and IPv6 is necessary.", "title": "Security" }, { "paragraph_id": 66, "text": "Research has shown that the use of fragmentation can be leveraged to evade network security controls, similar to IPv4. As a result, RFC 7112 requires that the first fragment of an IPv6 packet contains the entire IPv6 header chain, such that some very pathological fragmentation cases are forbidden. Additionally, as a result of research on the evasion of RA-Guard in RFC 7113, RFC 6980 has deprecated the use of fragmentation with Neighbor Discovery, and discouraged the use of fragmentation with Secure Neighbor Discovery (SEND).", "title": "Security" }, { "paragraph_id": 67, "text": "Due to the anticipated global growth of the Internet, the Internet Engineering Task Force (IETF) in the early 1990s started an effort to develop a next-generation IP protocol. By the beginning of 1992, several proposals appeared for an expanded Internet addressing system and by the end of 1992 the IETF announced a call for white papers. In September 1993, the IETF created a temporary, ad hoc IP Next Generation (IPng) area to deal specifically with such issues. The new area was led by Allison Mankin and Scott Bradner, and had a directorate with 15 engineers from diverse backgrounds for direction-setting and preliminary document review: The working-group members were J. 
Allard (Microsoft), Steve Bellovin (AT&T), Jim Bound (Digital Equipment Corporation), Ross Callon (Wellfleet), Brian Carpenter (CERN), Dave Clark (MIT), John Curran (NEARNET), Steve Deering (Xerox), Dino Farinacci (Cisco), Paul Francis (NTT), Eric Fleischmann (Boeing), Mark Knopper (Ameritech), Greg Minshall (Novell), Rob Ullmann (Lotus), and Lixia Zhang (Xerox).", "title": "Standardization through RFCs" }, { "paragraph_id": 68, "text": "The Internet Engineering Task Force adopted the IPng model on 25 July 1994, with the formation of several IPng working groups. By 1996, a series of RFCs was released defining Internet Protocol version 6 (IPv6), starting with RFC 1883. (Version 5 was used by the experimental Internet Stream Protocol.)", "title": "Standardization through RFCs" }, { "paragraph_id": 69, "text": "The first RFC to standardize IPv6 was RFC 1883 in 1995, which was obsoleted by RFC 2460 in 1998. In July 2017 this RFC was superseded by RFC 8200, which elevated IPv6 to \"Internet Standard\" (the highest maturity level for IETF protocols).", "title": "Standardization through RFCs" }, { "paragraph_id": 70, "text": "The 1993 introduction of Classless Inter-Domain Routing (CIDR) in the routing and IP address allocation for the Internet, and the extensive use of network address translation (NAT), delayed IPv4 address exhaustion to allow for IPv6 deployment, which began in the mid-2000s.", "title": "Deployment" }, { "paragraph_id": 71, "text": "Universities were among the early adopters of IPv6. Virginia Tech deployed IPv6 at a trial location in 2004 and later expanded IPv6 deployment across the campus network. By 2016, 82% of the traffic on their network used IPv6. Imperial College London began experimental IPv6 deployment in 2003 and by 2016 the IPv6 traffic on their networks averaged between 20% and 40%. 
A significant portion of this IPv6 traffic was generated through their high energy physics collaboration with CERN, which relies entirely on IPv6.", "title": "Deployment" }, { "paragraph_id": 72, "text": "The Domain Name System (DNS) has supported IPv6 since 2008. In the same year, IPv6 was first used in a major world event during the Beijing 2008 Summer Olympics.", "title": "Deployment" }, { "paragraph_id": 73, "text": "By 2011, all major operating systems in use on personal computers and server systems had production-quality IPv6 implementations. Cellular telephone systems presented a large deployment field for Internet Protocol devices as mobile telephone service made the transition from 3G to 4G technologies, in which voice is provisioned as a voice over IP (VoIP) service that would leverage IPv6 enhancements. In 2009, the US cellular operator Verizon released technical specifications for devices to operate on its \"next-generation\" networks. The specification mandated IPv6 operation according to the 3GPP Release 8 Specifications (March 2009), and deprecated IPv4 as an optional capability.", "title": "Deployment" }, { "paragraph_id": 74, "text": "The deployment of IPv6 in the Internet backbone continued. In 2018 only 25.3% of the about 54,000 autonomous systems advertised both IPv4 and IPv6 prefixes in the global Border Gateway Protocol (BGP) routing database. A further 243 networks advertised only an IPv6 prefix. Internet backbone transit networks offering IPv6 support existed in every country globally, except in parts of Africa, the Middle East and China. By mid-2018 some major European broadband ISPs had deployed IPv6 for the majority of their customers. Sky UK provided over 86% of its customers with IPv6, Deutsche Telekom had 56% deployment of IPv6, XS4ALL in the Netherlands had 73% deployment and in Belgium the broadband ISPs VOO and Telenet had 73% and 63% IPv6 deployment respectively. 
In the United States the broadband ISP Xfinity had an IPv6 deployment of about 66%. In 2018 Xfinity reported an estimated 36.1 million IPv6 users, while AT&T reported 22.3 million IPv6 users.", "title": "Deployment" } ]
Internet Protocol version 6 (IPv6) is the most recent version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion, and was intended to replace IPv4. In December 1998, IPv6 became a Draft Standard for the IETF, which subsequently ratified it as an Internet Standard on 14 July 2017. Devices on the Internet are assigned a unique IP address for identification and location definition. With the rapid growth of the Internet after commercialization in the 1990s, it became evident that far more addresses would be needed to connect devices than the IPv4 address space had available. By 1998, the IETF had formalized the successor protocol. IPv6 uses 128-bit addresses, theoretically allowing 2^128, or approximately 3.4×10^38 total addresses. The actual number is slightly smaller, as multiple ranges are reserved for special usage or completely excluded from general use. The two protocols are not designed to be interoperable, and thus direct communication between them is impossible, complicating the move to IPv6. However, several transition mechanisms have been devised to rectify this. IPv6 provides other technical benefits in addition to a larger addressing space. In particular, it permits hierarchical address allocation methods that facilitate route aggregation across the Internet, and thus limit the expansion of routing tables. The use of multicast addressing is expanded and simplified, and provides additional optimization for the delivery of services. Device mobility, security, and configuration aspects have been considered in the design of the protocol. IPv6 addresses are represented as eight groups of four hexadecimal digits each, separated by colons. 
The full representation may be shortened; for example, 2001:0db8:0000:0000:0000:8a2e:0370:7334 becomes 2001:db8::8a2e:370:7334.
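A minimal sketch, using Python's standard ipaddress module, of the shortening example and the address-space size quoted above:

```python
import ipaddress

# The shortening example from the text, reproduced programmatically.
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:8a2e:0370:7334")
print(addr.compressed)  # 2001:db8::8a2e:370:7334

# 128-bit addresses give 2**128 possible values, roughly 3.4×10^38.
total = ipaddress.IPv6Network("::/0").num_addresses
print(total)                  # 340282366920938463463374607431768211456
print(f"{float(total):.1e}")  # 3.4e+38
```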
2001-12-04T16:18:48Z
2023-12-21T09:56:24Z
[ "Template:Infobox networking protocol", "Template:Cite journal", "Template:Cite web", "Template:Rp", "Template:Reflist", "Template:Cite book", "Template:Cite IETF", "Template:Dead link", "Template:Wikiversity", "Template:IETF RFC", "Template:Citation needed", "Template:Citation", "Template:Cite video", "Template:IPaddr", "Template:Portal", "Template:Authority control", "Template:See also", "Template:Cite press release", "Template:Gaps", "Template:Internet protocol suite", "Template:Main", "Template:Toc level", "Template:Short description", "Template:Use dmy dates", "Template:By whom?", "Template:Cbignore", "Template:Cite news", "Template:Man", "Template:Wiktionary", "Template:IPv6", "Template:Update", "Template:Val" ]
https://en.wikipedia.org/wiki/IPv6
15,319
Inca Empire
The Inca Empire (also known as the Incan Empire and the Inka Empire), called Tawantinsuyu by its subjects (Quechua for the "Realm of the Four Parts"), was the largest empire in pre-Columbian America. The administrative, political, and military center of the empire was in the city of Cusco. The Inca civilization rose from the Peruvian highlands sometime in the early 13th century. The Spanish began the conquest of the Inca Empire in 1532 and by 1572, the last Inca state was fully conquered. From 1438 to 1533, the Incas incorporated a large portion of western South America, centered on the Andean Mountains, using conquest and peaceful assimilation, among other methods. At its largest, the empire joined modern-day Peru, what are now western Ecuador, western and south central Bolivia, northwest Argentina, the southwesternmost tip of Colombia and a large portion of modern-day Chile into a state comparable to the historical empires of Eurasia. Its official language was Quechua. The Inca Empire was unique in that it lacked many of the features associated with civilization in the Old World. Anthropologist Gordon McEwan wrote that the Incas were able to construct "one of the greatest imperial states in human history" without the use of the wheel, draft animals, knowledge of iron or steel, or even a system of writing. Notable features of the Inca Empire included its monumental architecture, especially stonework, extensive road network reaching all corners of the empire, finely-woven textiles, use of knotted strings (quipu) for record keeping and communication, agricultural innovations and production in a difficult environment, and the organization and management fostered or imposed on its people and their labor. The Inca Empire functioned largely without money and without markets. Instead, exchange of goods and services was based on reciprocity between individuals and among individuals, groups, and Inca rulers. 
"Taxes" consisted of a labour obligation of a person to the Empire. The Inca rulers (who theoretically owned all the means of production) reciprocated by granting access to land and goods and providing food and drink in celebratory feasts for their subjects. Many local forms of worship persisted in the empire, most of them concerning local sacred Huacas, but the Inca leadership encouraged the sun worship of Inti – their sun god – and imposed its sovereignty above other cults such as that of Pachamama. The Incas considered their king, the Sapa Inca, to be the "son of the sun". The Incan economy is a subject of scholarly debate. Darrell E. La Lone, in his work The Inca as a Nonmarket Economy, noted that scholars have described it as "feudal, slave, [or] socialist," as well as "a system based on reciprocity and redistribution; a system with markets and commerce; or an Asiatic mode of production." The Inca referred to their empire as Tawantinsuyu, "the four suyu". In Quechua, tawa is four and -ntin is a suffix naming a group, so that a tawantin is a quartet, a group of four things taken together, in this case the four suyu ("regions" or "provinces") whose corners met at the capital. The four suyu were: Chinchaysuyu (north), Antisuyu (east; the Amazon jungle), Qullasuyu (south) and Kuntisuyu (west). The name Tawantinsuyu was, therefore, a descriptive term indicating a union of provinces. The Spanish transliterated the name as Tahuatinsuyo or Tahuatinsuyu. While the term Inka nowadays is translated as "ruler" or "lord" in Quechua, this term does not simply refer to the "King" of the Tawantinsuyu or Sapa Inka but also to the Inca nobles, and some theorize its meaning could be broader. In that sense, the Inca nobles were a small percentage of the total population of the empire, probably numbering only 15,000 to 40,000, but ruling a population of around 10 million people. 
When the Spanish arrived in the Empire of the Incas they gave the name "Peru" to what the natives knew as Tawantinsuyu. The name "Inca Empire" (Imperio de los Incas) originated in the chronicles of the 16th century. The Inca Empire was the last chapter of thousands of years of Andean civilizations. The Andean civilization is one of at least five civilizations in the world deemed by scholars to be "pristine." The concept of a "pristine" civilization refers to a civilization that has developed independently of external influences and is not a derivative of other civilizations. The Inca Empire was preceded by two large-scale empires in the Andes: the Tiwanaku (c. 300–1100 AD), based around Lake Titicaca, and the Wari or Huari (c. 600–1100 AD), centered near the city of Ayacucho. The Wari occupied the Cuzco area for about 400 years. Thus, many of the characteristics of the Inca Empire derived from earlier multi-ethnic and expansive Andean cultures. To those earlier civilizations may be owed some of the accomplishments cited for the Inca Empire: "thousands of miles of roads and dozens of large administrative centers with elaborate stone construction...terraced mountainsides and filled in valleys", and the production of "vast quantities of goods". Carl Troll has argued that the development of the Inca state in the central Andes was aided by conditions that allow for the elaboration of the staple food chuño. Chuño, which can be stored for long periods, is made of potato dried at the freezing temperatures that are common at nighttime in the southern Peruvian highlands. Such a link between the Inca state and chuño has been questioned, as other crops such as maize can also be dried with only sunlight. Troll also argued that llamas, the Incas' pack animal, can be found in their largest numbers in this very same region. 
The maximum extent of the Inca Empire roughly coincided with the distribution of llamas and alpacas, the only large domesticated animals in Pre-Hispanic America. As a third point Troll pointed out irrigation technology as advantageous to Inca state-building. While Troll theorized concerning environmental influences on the Inca Empire, he opposed environmental determinism, arguing that culture lay at the core of the Inca civilization. The Inca people were a pastoral tribe in the Cusco area around the 12th century. Indigenous Peruvian oral history tells an origin story of three caves. The center cave at Tampu T'uqu (Tambo Tocco) was named Qhapaq T'uqu ("principal niche", also spelled Capac Tocco). The other caves were Maras T'uqu (Maras Tocco) and Sutiq T'uqu (Sutic Tocco). Four brothers and four sisters stepped out of the middle cave. They were: Ayar Manco, Ayar Cachi, Ayar Awqa (Ayar Auca) and Ayar Uchu; and Mama Ocllo, Mama Raua, Mama Huaco and Mama Qura (Mama Cora). Out of the side caves came the people who were to be the ancestors of all the Inca clans. Ayar Manco carried a magic staff made of the finest gold. Where this staff landed, the people would live. They traveled for a long time. On the way, Ayar Cachi boasted about his strength and power. His siblings tricked him into returning to the cave to get a sacred llama. When he went into the cave, they trapped him inside to get rid of him. Ayar Uchu decided to stay on the top of the cave to look over the Inca people. The minute he proclaimed that, he turned to stone. They built a shrine around the stone and it became a sacred object. Ayar Auca grew tired of all this and decided to travel alone. Only Ayar Manco and his four sisters remained. Finally, they reached Cusco. The staff sank into the ground. Before they arrived, Mama Ocllo had already borne Ayar Manco a child, Sinchi Roca. The people who were already living in Cusco fought hard to keep their land, but Mama Huaca was a good fighter. 
When the enemy attacked, she threw her bolas (several stones tied together that spun through the air when thrown) at a soldier (gualla) and killed him instantly. The other people became afraid and ran away. After that, Ayar Manco became known as Manco Cápac, the founder of the Inca. It is said that he and his sisters built the first Inca homes in the valley with their own hands. When the time came, Manco Cápac turned to stone like his brothers before him. His son, Sinchi Roca, became the second emperor of the Inca. Under the leadership of Manco Cápac, the Inca formed the small city-state Kingdom of Cusco (Quechua Qusqu', Qosqo). In 1438, they began a far-reaching expansion under the command of Sapa Inca (paramount leader) Pachacuti-Cusi Yupanqui, whose name meant "earth-shaker". The name of Pachacuti was given to him after he conquered the Tribe of Chancas (modern Apurímac). During his reign, he and his son Tupac Yupanqui brought much of the modern-day territory of Peru under Inca control. Pachacuti reorganized the kingdom of Cusco into the Tahuantinsuyu, which consisted of a central government with the Inca at its head and four provincial governments with strong leaders: Chinchasuyu (NW), Antisuyu (NE), Kuntisuyu (SW) and Qullasuyu (SE). Pachacuti is thought to have built Machu Picchu, either as a family home or summer retreat, although it may have been an agricultural station. Pachacuti sent spies to regions he wanted in his empire and they brought to him reports on political organization, military strength and wealth. He then sent messages to their leaders extolling the benefits of joining his empire, offering them presents of luxury goods such as high quality textiles and promising that they would be materially richer as his subjects. Most accepted the rule of the Inca as a fait accompli and acquiesced peacefully. Refusal to accept Inca rule resulted in military conquest. Following conquest the local rulers were executed. 
The ruler's children were brought to Cusco to learn about Inca administration systems, and then returned to rule their native lands. This allowed the Inca to indoctrinate them into the Inca nobility and, with luck, marry their daughters into families at various corners of the empire. Pachacuti had named his favorite son, Amaru Yupanqui, as his co-ruler and successor. However, as co-ruler Amaru showed little interest in military affairs. Due to this lack of military talent, he faced much opposition from the Inca nobility, who began to plot against him. Despite this, Pachacuti decided to turn a blind eye to the shortcomings of his son. Nevertheless, following a revolt during which Amaru almost led the Inca forces to defeat, the Sapa Inca decided to replace the co-ruler with another one of his sons, Túpac Inca Yupanqui. Túpac Inca Yupanqui began conquests to the north in 1463 and continued them as Inca ruler after Pachacuti's death in 1471. Túpac Inca's most important conquest was the Kingdom of Chimor, the Inca's only serious rival for the Peruvian coast. Túpac Inca's empire then stretched north into what are today Ecuador and Colombia. Túpac Inca's son Huayna Cápac added a small portion of land to the north in what is today Ecuador. At its height, the Inca Empire included modern-day Peru, what are today western and south central Bolivia, southwest Ecuador and Colombia and a large portion of modern-day Chile, north of the Maule River. Traditional historiography claims the advance south halted after the Battle of the Maule where they met determined resistance from the Mapuche. This view is challenged by historian Osvaldo Silva who argues instead that it was the social and political framework of the Mapuche that posed the main difficulty in imposing imperial rule. Silva does accept that the battle of the Maule was a stalemate, but argues the Incas lacked incentives for conquest they had had when fighting more complex societies such as the Chimú Empire. 
Silva also disputes the date given by traditional historiography for the battle: the late 15th century during the reign of Topa Inca Yupanqui (1471–93). Instead, he places it in 1532, during the Inca Civil War. Nevertheless, Silva agrees with the claim that the bulk of the Incan conquests were made during the late 15th century. At the time of the Incan Civil War, an Inca army was, according to Diego de Rosales, subduing a revolt among the Diaguitas of Copiapó and Coquimbo. The empire's push into the Amazon Basin near the Chinchipe River was stopped by the Shuar in 1527. The empire extended into corners of what are today northern Argentina and southern Colombia. However, most of the southern portion of the Inca empire, the portion known as Qullasuyu, was located in the Altiplano. The Inca Empire was an amalgamation of languages, cultures and peoples. The components of the empire were not all uniformly loyal, nor were the local cultures all fully integrated. The Inca empire as a whole had an economy based on exchange and taxation of luxury goods and labour. The following quote describes a method of taxation: For as is well known to all, not a single village of the highlands or the plains failed to pay the tribute levied on it by those who were in charge of these matters. There were even provinces where, when the natives alleged that they were unable to pay their tribute, the Inca ordered that each inhabitant should be obliged to turn in every four months a large quill full of live lice, which was the Inca's way of teaching and accustoming them to pay tribute. Spanish conquistadors led by Francisco Pizarro and his brothers explored south from what is today Panama, reaching Inca territory by 1526. It was clear that they had reached a wealthy land with prospects of great treasure, and after another expedition in 1529 Pizarro traveled to Spain and received royal approval to conquer the region and be its viceroy.
This approval was received as detailed in the following quote: "In July 1529 the Queen of Spain signed a charter allowing Pizarro to conquer the Incas. Pizarro was named governor and captain of all conquests in Peru, or New Castile, as the Spanish now called the land". When the conquistadors returned to Peru in 1532, a war of succession between the sons of Sapa Inca Huayna Capac, Huáscar and Atahualpa, and unrest among newly conquered territories had weakened the empire. Perhaps more importantly, smallpox, influenza, typhus and measles had spread from Central America. The first epidemic of European disease in the Inca Empire was probably in the 1520s, killing Huayna Capac, his designated heir, and an unknown, probably large, number of other Incan subjects. The forces led by Pizarro consisted of 168 men, along with one cannon and 27 horses. The conquistadors were armed with lances, arquebuses, steel armor and long swords. In contrast, the Inca used weapons made out of wood, stone, copper and bronze, and wore armor made of alpaca fiber, putting them at a significant technological disadvantage: none of their weapons could pierce the Spanish steel armor. In addition, due to the absence of horses in Peru, the Inca had not developed tactics to fight cavalry. However, the Inca were still effective warriors, being able to successfully fight the Mapuche, who later would strategically defeat the Spanish as they expanded further south. The first engagement between the Inca and the Spanish was the Battle of Puná, near present-day Guayaquil, Ecuador, on the Pacific Coast; Pizarro then founded the city of Piura in July 1532. Hernando de Soto was sent inland to explore the interior and returned with an invitation to meet the Inca, Atahualpa, who had defeated his brother in the civil war and was resting at Cajamarca with his army of 80,000 troops, who were at that moment armed only with hunting tools (knives and lassos for hunting llamas).
Pizarro and some of his men, most notably a friar named Vicente de Valverde, met with the Inca, who had brought only a small retinue. The Inca offered them ceremonial chicha in a golden cup, which the Spanish rejected. The Spanish interpreter, Friar Vicente, read the "Requerimiento", which demanded that Atahualpa and his empire accept the rule of King Charles I of Spain and convert to Christianity. Atahualpa dismissed the message and asked them to leave. After this, the Spanish began their attack against the mostly unarmed Inca, captured Atahualpa as a hostage, and forced the Inca to collaborate. Atahualpa offered the Spaniards enough gold to fill the room he was imprisoned in and twice that amount of silver. The Inca fulfilled this ransom, but Pizarro deceived them, refusing to release the Inca afterwards. During Atahualpa's imprisonment, Huáscar was assassinated elsewhere. The Spaniards maintained that this was at Atahualpa's orders; this was used as one of the charges against Atahualpa when the Spaniards finally executed him, in August 1533. Although "defeat" often implies an unwanted loss in battle, many of the diverse ethnic groups ruled by the Inca "welcomed the Spanish invaders as liberators and willingly settled down with them to share rule of Andean farmers and miners". Many regional leaders, called Kurakas, continued to serve the Spanish overlords, called encomenderos, as they had served the Inca overlords. Other than efforts to spread the religion of Christianity, the Spanish benefited from and made little effort to change the society and culture of the former Inca Empire until the rule of Francisco de Toledo as viceroy from 1569 to 1581. The Spanish installed Atahualpa's brother Manco Inca Yupanqui in power; for some time Manco cooperated with the Spanish while they fought to put down resistance in the north. Meanwhile, an associate of Pizarro, Diego de Almagro, attempted to claim Cusco.
Manco tried to use this intra-Spanish feud to his advantage, recapturing Cusco in 1536, but the Spanish retook the city afterwards. Manco Inca then retreated to the mountains of Vilcabamba and established the small Neo-Inca State, where he and his successors ruled for another 36 years, sometimes raiding the Spanish or inciting revolts against them. In 1572 the last Inca stronghold was conquered, and the last ruler, Túpac Amaru, Manco's son, was captured and executed. This ended resistance to the Spanish conquest under the political authority of the Inca state. After the fall of the Inca Empire, many aspects of Inca culture were systematically destroyed, including their sophisticated farming system, known as the vertical archipelago model of agriculture. Spanish colonial officials used the Inca mita corvée labor system for colonial aims, sometimes brutally. One member of each family was forced to work in the gold and silver mines, the foremost of which was the titanic silver mine at Potosí. When a family member died, which would usually happen within a year or two, the family was required to send a replacement. Although smallpox is usually presumed to have spread through the Empire before the arrival of the Spaniards, the devastation is also consistent with other theories. Beginning in Colombia, smallpox spread rapidly before the Spanish invaders first arrived in the empire. The spread was probably aided by the efficient Inca road system. Smallpox was only the first epidemic. Other diseases, including a probable typhus outbreak in 1546, influenza and smallpox together in 1558, smallpox again in 1589, diphtheria in 1614, and measles in 1618, all ravaged the Inca people. There would be periodic attempts by indigenous leaders to expel the Spanish colonists and re-create the Inca Empire until the late 18th century. See Juan Santos Atahualpa and Túpac Amaru II. The number of people inhabiting Tawantinsuyu at its peak is uncertain, with estimates ranging from 4 to 37 million.
Most population estimates are in the range of 6 to 14 million. In spite of the fact that the Inca kept excellent census records using their quipus, knowledge of how to read them was lost, as almost all fell into disuse and disintegrated over time or were destroyed by the Spaniards. The empire was linguistically diverse. Some of the most important languages were Quechua, Aymara, Puquina and Mochica, respectively mainly spoken in the Central Andes, the Altiplano (Qullasuyu), the south Peruvian coast (Kuntisuyu), and the area of the north Peruvian coast (Chinchaysuyu) around Chan Chan, today Trujillo. Other languages included Quignam, Jaqaru, Leco, Uru-Chipaya languages, Kunza, Humahuaca, Cacán, Mapudungun, Culle, Chachapoya, Catacao languages, Manta, Barbacoan languages, and Cañari–Puruhá, as well as numerous Amazonian languages in the frontier regions. The exact linguistic topography of the pre-Columbian and early colonial Andes remains incompletely understood, owing to the extinction of several languages and the loss of historical records. In order to manage this diversity, the Inca lords promoted the use of Quechua, especially the variety spoken in what is now Lima, as the Qhapaq Runasimi ("great language of the people"), the official language and lingua franca. Defined by mutual intelligibility, Quechua is actually a family of languages rather than one single language, parallel to the Romance or Slavic languages in Europe. Most communities within the empire, even those resistant to Inca rule, learned to speak a variety of Quechua (forming new regional varieties with distinct phonetics) in order to communicate with the Inca lords and mitma colonists, as well as the wider integrating society, but largely retained their native languages as well. The Incas also had their own ethnic language, referred to as Qhapaq simi ("royal language"), which is thought to have been closely related to or a dialect of Puquina.
There are several common misconceptions about the history of Quechua, as it is frequently identified as the "Inca language". Quechua did not originate with the Incas, had been a lingua franca in multiple areas before the Inca expansions, was diverse before the rise of the Incas, and was not the native or original language of the Incas. However, the Incas left a linguistic legacy, in that they introduced Quechua to many areas where it is still widely spoken today, including Ecuador, southern Bolivia, southern Colombia, and parts of the Amazon basin. The Spanish conquerors continued the official usage of Quechua during the early colonial period, and transformed it into a literary language. The Incas were not known to have developed a written form of language; however, they visually recorded narratives through paintings on vases and cups (qirus). These paintings are usually accompanied by geometric patterns known as toqapu, which are also found in textiles. Researchers have speculated that toqapu patterns could have served as a form of written communication (e.g. heraldry or glyphs); however, this remains unclear. The Incas also kept records by using quipus. Because of the high infant mortality rates that plagued the Inca Empire, all newborn infants were given the term 'wawa' when they were born. Most families did not invest very much in their child until they reached the age of two or three years old. Once the child reached the age of three, a "coming of age" ceremony occurred, called the rutuchikuy. For the Incas, this ceremony indicated that the child had entered the stage of "ignorance". During this ceremony, the family would invite all relatives to their house for food and dance, and then each member of the family would receive a lock of hair from the child. After each family member had received a lock, the father would shave the child's head.
This stage of life was characterized by "ignorance, inexperience, and lack of reason, a condition that the child would overcome with time". In Incan society, in order to advance from the stage of ignorance to maturity, the child had to learn the roles associated with their gender. The next important ritual was to celebrate the maturity of a child. Unlike the coming of age ceremony, the celebration of maturity signified the child's sexual potency. This celebration of puberty was called warachikuy for boys and qikuchikuy for girls. The warachikuy ceremony included dancing, fasting, tasks to display strength, and family ceremonies. The boy would also be given new clothes and taught how to act as an unmarried man. The qikuchikuy signified the onset of menstruation, upon which the girl would go into the forest alone and return only once the bleeding had ended. In the forest she would fast, and, once returned, the girl would be given a new name, adult clothing, and advice. This "folly" stage of life was the time young adults were allowed to have sex without being a parent. Between the ages of 20 and 30, people were considered young adults, "ripe for serious thought and labor". Young adults were able to retain their youthful status by living at home and assisting in their home community. Young adults only reached full maturity and independence once they had married. At the end of life, the terms for men and women denote loss of sexual vitality and humanity. Specifically, the "decrepitude" stage signifies the loss of mental well-being and further physical decline. In the Incan Empire, the age of marriage differed for men and women: men typically married at the age of 20, while women usually got married about four years earlier, at the age of 16. Men who were highly ranked in society could have multiple wives, but those lower in the ranks could only take a single wife. Marriages were typically within classes and resembled a more business-like agreement.
Once married, the women were expected to cook, collect food and watch over the children and livestock. Girls and mothers would also work around the house to keep it orderly to please the public inspectors. These duties remained the same even after wives became pregnant, with the added responsibility of praying and making offerings to Kanopa, the god of pregnancy. It was typical for marriages to begin on a trial basis, with both men and women having a say in the longevity of the marriage. If the man felt that it would not work out, or if the woman wanted to return to her parents' home, the marriage would end. Once the marriage was final, the only way the two could be divorced was if they did not have a child together. Marriage within the Empire was crucial for survival. A family was considered disadvantaged if there was not a married couple at the center, because everyday life centered around the balance of male and female tasks. According to some historians, such as Terence N. D'Altroy, male and female roles were considered equal in Inca society. The "indigenous cultures saw the two genders as complementary parts of a whole". In other words, there was not a hierarchical structure in the domestic sphere for the Incas. Within the domestic sphere, women came to be known as weavers, although there is significant evidence to suggest that this gender role did not appear until colonizing Spaniards realized women's productive talents in this sphere and used them to their economic advantage. There is evidence to suggest that both men and women contributed equally to the weaving tasks in pre-Hispanic Andean culture. Women's everyday tasks included spinning, watching the children, weaving cloth, cooking, brewing chicha, preparing fields for cultivation, planting seeds, bearing children, harvesting, weeding, hoeing, herding, and carrying water.
Men, on the other hand, "weeded, plowed, participated in combat, helped in the harvest, carried firewood, built houses, herded llama and alpaca, and spun and wove when necessary". This relationship between the genders may have been complementary. Unsurprisingly, onlooking Spaniards believed women were treated like slaves, because women did not work in Spanish society to the same extent, and certainly did not work in fields. Women were sometimes allowed to own land and herds, because inheritance was passed down from both the mother's and father's side of the family. Kinship within Inca society followed a parallel line of descent. In other words, women descended from women and men descended from men. Due to this parallel descent, a woman had access to land and other assets through her mother. Due to the dry climate that extends from modern-day Peru to what is now Chile's Norte Grande, mummification occurred naturally by desiccation. It is believed that the ancient Incas learned to mummify their dead to show reverence to their leaders and representatives. Mummification was chosen to preserve the body and to give others the opportunity to worship them in their death. The ancient Inca believed in reincarnation, so preservation of the body was vital for passage into the afterlife. Since mummification was reserved for royalty, this entailed preserving power by placing the deceased's valuables with the body in places of honor. The bodies remained accessible for ceremonies where they would be removed and celebrated with. The ancient Inca mummified their dead with various tools. Chicha corn beer was used to delay decomposition and the effects of bacterial activity on the body. The bodies were then stuffed with natural materials such as vegetable matter and animal hair. Sticks were used to maintain their shape and poses.
In addition to the mummification process, the Inca would bury their dead in the fetal position inside a vessel intended to mimic the womb, in preparation for their new birth. A ceremony would be held that included music, food, and drink for the relatives and loved ones of the deceased. Inca myths were transmitted orally until early Spanish colonists recorded them; however, some scholars claim that they were recorded on quipus, Andean knotted string records. The Inca believed in reincarnation. After death, the passage to the next world was fraught with difficulties. The spirit of the dead, camaquen, would need to follow a long road, and during the trip the assistance of a black dog that could see in the dark was required. Most Incas imagined the afterworld to be like an earthly paradise with flower-covered fields and snow-capped mountains. It was important to the Inca that they not die as a result of burning or that the body of the deceased not be incinerated. Burning would cause their vital force to disappear and threaten their passage to the afterworld. The Inca nobility practiced cranial deformation. They wrapped tight cloth straps around the heads of newborns to shape their soft skulls into a more conical form, thus distinguishing the nobility from other social classes. The Incas made human sacrifices. As many as 4,000 servants, court officials, favorites and concubines were killed upon the death of the Inca Huayna Capac in 1527. The Incas performed child sacrifices around important events, such as the death of the Sapa Inca or during a famine. These sacrifices were known as qhapaq hucha. The Incas were polytheists who worshipped many gods. The Inca Empire employed central planning. The Inca Empire traded with outside regions, although they did not operate a substantial internal market economy.
While axe-monies were used along the northern coast, presumably by the provincial mindaláe trading class, most households in the empire lived in a traditional economy in which they were required to pay taxes, usually in the form of the mit'a corvée labor, and military obligations, though barter (or trueque) was present in some areas. In return, the state provided security, food in times of hardship through the supply of emergency resources, agricultural projects (e.g. aqueducts and terraces) to increase productivity, and occasional feasts hosted by Inca officials for their subjects. While mit'a was used by the state to obtain labor, individual villages had a pre-Inca system of communal work, known as mink'a. This system survives to the modern day, known as mink'a or faena. The economy rested on the material foundations of the vertical archipelago, a system of ecological complementarity in accessing resources, and the cultural foundation of ayni, or reciprocal exchange. The Sapa Inca was conceptualized as divine and was effectively head of the state religion. The Willaq Umu (or Chief Priest) was second to the emperor. Local religious traditions continued and in some cases, such as the Oracle at Pachacamac on the Peruvian coast, were officially venerated. Following Pachacuti, the Sapa Inca claimed descent from the sun god Inti, which placed a high value on imperial blood; by the end of the empire, it was common to incestuously wed brother and sister. The Sapa Inca was the "son of the sun", and his people were the intip churin, or "children of the sun", and both his right to rule and mission to conquer derived from his holy ancestor. The Sapa Inca also presided over ideologically important festivals, notably the Inti Raymi, or "Sunfest", attended by soldiers, mummified rulers, nobles, clerics and the general population of Cusco, beginning on the June solstice and culminating nine days later with the ritual breaking of the earth using a foot plow by the Inca.
Moreover, Cusco was considered cosmologically central, loaded as it was with huacas and radiating ceque lines as the geographic center of the Four-Quarters; Inca Garcilaso de la Vega called it "the navel of the universe". The Inca Empire was a decentralized state consisting of a central government with the Inca at its head and four regional quarters, or suyu: Chinchay Suyu (NW), Anti Suyu (NE), Kunti Suyu (SW) and Qulla Suyu (SE). The four corners of these quarters met at the center, Cusco. These suyu were likely created around 1460, during the reign of Pachacuti, before the empire reached its largest territorial extent. At the time the suyu were established they were roughly of equal size; their proportions changed only later, as the empire expanded north and south along the Andes. Cusco was likely not organized as a wamani, or province. Rather, it was probably somewhat akin to a modern federal district, like Washington, DC or Mexico City. The city sat at the center of the four suyu and served as the preeminent center of politics and religion. While Cusco was essentially governed by the Sapa Inca, his relatives and the royal panaqa lineages, each suyu was governed by an Apu, a term of esteem used for men of high status and for venerated mountains. As the Inca did not have written records, it is impossible to exhaustively list the constituent wamani. However, colonial records allow us to reconstruct a partial list. Both Cusco as a district and the four suyu as administrative regions were grouped into upper hanan and lower hurin divisions. There were likely more than 86 wamani, with more than 48 in the highlands and more than 38 on the coast. The most populous suyu was Chinchaysuyu, which encompassed the former Chimu empire and much of the northern Andes. At its largest extent, it extended through much of what are now Ecuador and Colombia. The largest suyu by area was Qullasuyu, named after the Aymara-speaking Qulla people.
It encompassed what is now the Bolivian Altiplano and much of the southern Andes, reaching what is now Argentina and as far south as the Maipo or Maule river in modern Central Chile. Historian José Bengoa singled out Quillota as likely being the foremost Inca settlement in Chile. The second smallest suyu, Antisuyu, was northeast of Cusco in the high Andes. Its name is the root of the word "Andes". Kuntisuyu was the smallest suyu, located along the southern coast of modern Peru, extending into the highlands towards Cusco. The Inca state had no separate judiciary or codified laws. Customs, expectations and traditional local power holders governed behavior. The state had legal force, exercised for example through tokoyrikoq (lit. "he who sees all"), or inspectors. The highest such inspector, typically a blood relative of the Sapa Inca, acted independently of the conventional hierarchy, providing a point of view for the Sapa Inca free of bureaucratic influence. The Inca had three moral precepts that governed their behavior. Colonial sources are not entirely clear or in agreement about Inca government structure, such as the exact duties and functions of government positions, but the basic structure can be broadly described. At the top was the Sapa Inca. Below him may have been the Willaq Umu, literally the "priest who recounts", the High Priest of the Sun. However, beneath the Sapa Inca also sat the Inkap rantin, who was a confidant and assistant to the Sapa Inca, perhaps similar to a Prime Minister. Starting with Topa Inca Yupanqui, a "Council of the Realm" was composed of 16 nobles: 2 from hanan Cusco; 2 from hurin Cusco; 4 from Chinchaysuyu; 2 from Cuntisuyu; 4 from Collasuyu; and 2 from Antisuyu. This weighting of representation balanced the hanan and hurin divisions of the empire, both within Cusco and within the Quarters (hanan suyukuna and hurin suyukuna). While provincial bureaucracy and government varied greatly, the basic organization was decimal.
Taxpayers – male heads of household of a certain age range – were organized into corvée labor units (often doubling as military units) that formed the state's muscle as part of mit'a service. Each unit of more than 100 taxpayers was headed by a kuraka, while smaller units were headed by a kamayuq, a lower, non-hereditary status. However, while kuraka status was hereditary and a kuraka typically served for life, his position in the hierarchy was subject to change based on the privileges of superiors in the hierarchy; a pachaka kuraka could be appointed to the position by a waranqa kuraka. Furthermore, one kuraka in each decimal level could serve as the head of one of the nine groups at a lower level, so that a pachaka kuraka might also be a waranqa kuraka, in effect directly responsible for one unit of 100 taxpayers and less directly responsible for nine other such units.

"We can assure your majesty that it is so beautiful and has such fine buildings that it would even be remarkable in Spain." – Francisco Pizarro

Architecture was the most important of the Incan arts, with textiles reflecting architectural motifs. The most notable example is Machu Picchu, which was constructed by Inca engineers. The prime Inca structures were made of stone blocks that fit together so well that a knife could not be fitted through the stonework. These constructs have survived for centuries, with no mortar to sustain them. This process was first used on a large scale by the Pucara (c. 300 BC–AD 300) peoples to the south in Lake Titicaca and later in the city of Tiwanaku (c. AD 400–1100) in what is now Bolivia. The rocks were sculpted to fit together exactly by repeatedly lowering a rock onto another and carving away any sections on the lower rock where the dust was compressed. The tight fit and the concavity on the lower rocks made them extraordinarily stable, despite the ongoing challenge of earthquakes and volcanic activity.
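The decimal administrative ladder described above – nested units of taxpayers in powers of ten, each headed by a kuraka or kamayuq – can be sketched as a small arithmetic model. This is an illustrative reconstruction, not a historical record: the unit names chunka (10) and hunu (10,000) are common scholarly reconstructions and are assumptions here, while pachaka (100) and waranqa (1,000) appear in the text.

```python
# Illustrative model of the Inca decimal taxpayer organization.
# Names below the pachaka/waranqa levels are assumed reconstructions.
DECIMAL_UNITS = [
    ("chunka", 10),      # 10 taxpayers (assumed name)
    ("pachaka", 100),    # 100 taxpayers, headed by a pachaka kuraka
    ("waranqa", 1_000),  # 1,000 taxpayers, headed by a waranqa kuraka
    ("hunu", 10_000),    # 10,000 taxpayers (assumed name)
]

def units_needed(taxpayers):
    """How many whole units of each size a given taxpayer count fills."""
    return {name: taxpayers // size for name, size in DECIMAL_UNITS}

print(units_needed(25_000))
# {'chunka': 2500, 'pachaka': 250, 'waranqa': 25, 'hunu': 2}
```

The nesting also shows why one man could hold two ranks at once: each waranqa contained ten pachakas, one of which its own kuraka headed directly.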
Physical measures used by the Inca were based on human body parts. Units included fingers, the distance from thumb to forefinger, palms, cubits and wingspans. The most basic distance unit was thatkiy or thatki, or one pace. The next largest unit was reported by Cobo to be the topo or tupu, measuring 6,000 thatkiys, or about 7.7 km (4.8 mi); careful study has shown that a range of 4.0 to 6.3 km (2.5 to 3.9 mi) is likely. Next was the wamani, composed of 30 topos (roughly 232 km or 144 mi). To measure area, 25 by 50 wingspans were used, reckoned in topos (roughly 3,280 sq km or 1,270 sq mi). It seems likely that distance was often interpreted as one day's walk; the distance between tambo way-stations varies widely in terms of distance, but far less in terms of time to walk that distance. Inca calendars were strongly tied to astronomy. Inca astronomers understood equinoxes, solstices and zenith passages, along with the Venus cycle. They could not, however, predict eclipses. The Inca calendar was essentially lunisolar: two calendars were maintained in parallel, one solar and one lunar. As 12 lunar months fall 11 days short of a full 365-day solar year, those in charge of the calendar had to adjust every winter solstice. Each lunar month was marked with festivals and rituals. Apparently, the days of the week were not named and days were not grouped into weeks. Similarly, months were not grouped into seasons. Time during a day was not measured in hours or minutes, but in terms of how far the sun had travelled or how long it had taken to perform a task. The sophistication of Inca administration, calendrics and engineering required facility with numbers. Numerical information was stored in the knots of quipu strings, allowing for compact storage of large numbers. These numbers were stored in base-10 digits, the same base used by the Quechua language and in administrative and military units.
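The base-10 knot storage can be illustrated with a short sketch. It assumes the commonly described reading convention for numeric quipu cords: each cluster of knots along a pendant cord encodes one decimal digit, with the highest place value nearest the main cord and an empty position standing for zero. The function name and data layout are modern inventions for illustration only.

```python
# Illustrative sketch (not a historical algorithm): reading a quipu
# pendant cord whose knot clusters each encode one base-10 digit,
# highest place value first (nearest the main cord).

def decode_cord(knot_clusters):
    """knot_clusters: knots per cluster, top (highest place) first.
    A cluster with 0 knots marks a zero in that decimal place."""
    value = 0
    for knots in knot_clusters:
        value = value * 10 + knots  # shift one decimal place, add digit
    return value

# A cord with clusters of 3, 0 and 7 knots encodes the number 307.
print(decode_cord([3, 0, 7]))  # → 307
```

The compactness the text mentions follows directly: a number of any size needs only one cluster per decimal digit.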
These numbers, stored in quipu, could be calculated on yupanas, grids with squares of positionally varying mathematical values, perhaps functioning as an abacus. Calculation was facilitated by moving piles of tokens, seeds or pebbles between compartments of the yupana. It is likely that Inca mathematics at least allowed division of integers into integers or fractions and multiplication of integers and fractions. According to mid-17th-century Jesuit chronicler Bernabé Cobo, the Inca designated officials to perform accounting-related tasks. These officials were called quipo camayos. Study of khipu sample VA 42527 (Museum für Völkerkunde, Berlin) revealed that the numbers arranged in calendrically significant patterns were used for agricultural purposes in the "farm account books" kept by the khipukamayuq (accountant or warehouse keeper) to facilitate the closing of accounting books. Tunics were created by skilled Incan textile-makers as a piece of warm clothing, but they also symbolized cultural and political status and power. Cumbi was the fine, tapestry-woven woolen cloth that was produced and necessary for the creation of tunics. Cumbi was produced by specially-appointed women and men. Generally, textile-making was practiced by both men and women. As emphasized by certain historians, only with European conquest was it deemed that women would become the primary weavers in society, as opposed to Incan society where specialty textiles were produced by men and women equally. Complex patterns and designs were meant to convey information about order in Andean society as well as the Universe. Tunics could also symbolize one's relationship to ancient rulers or important ancestors. These textiles were frequently designed to represent the physical order of a society, for example, the flow of tribute within an empire. Many tunics have a "checkerboard effect" which is known as the collcapata. According to historians Kenneth Mills, William B. 
Taylor, and Sandra Lauderdale Graham, the collcapata patterns "seem to have expressed concepts of commonality, and, ultimately, unity of all ranks of people, representing a careful kind of foundation upon which the structure of Inkaic universalism was built." Rulers wore various tunics throughout the year, switching them out for different occasions and feasts. The symbols present within the tunics suggest the importance of "pictographic expression" within Inkan and other Andean societies far before the iconographies of the Spanish Christians. The uncu was a men's garment similar to a tunic: a knee-length upper-body garment that royals wore with a mantle cloth called the yacolla. Ceramics were painted using the polychrome technique, portraying numerous motifs including animals, birds, waves, felines (popular in the Chavín culture) and geometric patterns found in the Nazca style of ceramics. In a culture without a written language, ceramics portrayed the basic scenes of everyday life, including the smelting of metals, relationships and scenes of tribal warfare. The most distinctive Inca ceramic objects are the Cusco bottles or "aryballos". Many of these pieces are on display in Lima in the Larco Archaeological Museum and the National Museum of Archaeology, Anthropology and History. Almost all of the gold and silver work of the Incan empire was melted down by the conquistadors and shipped back to Spain. The Inca recorded information on assemblages of knotted strings, known as quipu, although these can no longer be fully decoded. Originally it was thought that quipu were used only as mnemonic devices or to record numerical data, but quipus are also believed to record history and literature. The Inca made many discoveries in medicine. They performed successful skull surgery by cutting holes in the skull to alleviate fluid buildup and inflammation caused by head wounds. Many skull surgeries performed by Inca surgeons were successful.
Survival rates were 80–90%, compared to about 30% before Inca times. The Incas revered the coca plant as sacred and magical. Its leaves were used in moderate amounts to lessen hunger and pain during work, but were mostly used for religious and health purposes. The Spaniards took advantage of the effects of chewing coca leaves. The Chasqui, messengers who ran throughout the empire to deliver messages, chewed coca leaves for extra energy. Coca leaves were also used as an anaesthetic during surgeries. The Inca army was the most powerful at that time, because any ordinary villager or farmer could be recruited as a soldier as part of the mit'a system of mandatory public service. Every able-bodied male Inca of fighting age had to take part in war in some capacity at least once and to prepare for warfare again when needed. By the time the empire reached its largest size, every section of the empire contributed to setting up an army for war. The Incas had no iron or steel and their weapons were not much more effective than those of their opponents, so they often defeated opponents by sheer force of numbers, or else by persuading them to surrender beforehand by offering generous terms. Inca weaponry included "hardwood spears launched using throwers, arrows, javelins, slings, the bolas, clubs, and maces with star-shaped heads made of copper or bronze". Rolling rocks downhill onto the enemy was a common strategy, taking advantage of the hilly terrain. Fighting was sometimes accompanied by drums and trumpets made of wood, shell or bone. Armor included: Roads allowed quick movement (on foot) for the Inca army and shelters called tambo and storage silos called qullqas were built one day's travelling distance from each other, so that an army on campaign could always be fed and rested. This can be seen in names of ruins such as Ollantay Tambo, or My Lord's Storehouse.
These were set up so the Inca and his entourage would always have supplies (and possibly shelter) ready as they traveled. Chronicles and references from the 16th and 17th centuries support the idea of a banner. However, it represented the Inca (emperor), not the empire. Francisco López de Jerez wrote in 1534: ... todos venían repartidos en sus escuadras con sus banderas y capitanes que los mandan, con tanto concierto como turcos.(... all of them came distributed into squads, with their flags and captains commanding them, as well-ordered as Turks.) Chronicler Bernabé Cobo wrote: The royal standard or banner was a small square flag, ten or twelve spans around, made of cotton or wool cloth, placed on the end of a long staff, stretched and stiff such that it did not wave in the air and on it each king painted his arms and emblems, for each one chose different ones, though the sign of the Incas was the rainbow and two parallel snakes along the width with the tassel as a crown, which each king used to add for a badge or blazon those preferred, like a lion, an eagle and other figures. (... el guión o estandarte real era una banderilla cuadrada y pequeña, de diez o doce palmos de ruedo, hecha de lienzo de algodón o de lana, iba puesta en el remate de una asta larga, tendida y tiesa, sin que ondease al aire, y en ella pintaba cada rey sus armas y divisas, porque cada uno las escogía diferentes, aunque las generales de los Incas eran el arco celeste y dos culebras tendidas a lo largo paralelas con la borda que le servía de corona, a las cuales solía añadir por divisa y blasón cada rey las que le parecía, como un león, un águila y otras figuras.)-Bernabé Cobo, Historia del Nuevo Mundo (1653) Guaman Poma's 1615 book, El primer nueva corónica y buen gobierno, shows numerous line drawings of Inca flags. In his 1847 book A History of the Conquest of Peru, "William H. Prescott ... 
says that in the Inca army each company had its particular banner and that the imperial standard, high above all, displayed the glittering device of the rainbow, the armorial ensign of the Incas." A 1917 world flags book says the Inca "heir-apparent ... was entitled to display the royal standard of the rainbow in his military campaigns." In modern times the rainbow flag has been wrongly associated with the Tawantinsuyu and displayed as a symbol of Inca heritage by some groups in Peru and Bolivia. The city of Cusco also flies the rainbow flag, but as an official flag of the city. The Peruvian president Alejandro Toledo (2001–2006) flew the rainbow flag in Lima's presidential palace. However, according to Peruvian historiography, the Inca Empire never had a flag. Peruvian historian María Rostworowski said, "I bet my life, the Inca never had that flag, it never existed, no chronicler mentioned it". Also, according to the Peruvian newspaper El Comercio, the flag dates to the first decades of the 20th century, and even the Congress of the Republic of Peru has determined that the flag is a fake by citing the conclusion of the National Academy of Peruvian History: "The official use of the wrongly called 'Tawantinsuyu flag' is a mistake. In the Pre-Hispanic Andean World there did not exist the concept of a flag, it did not belong to their historic context". The people of the Andes, including the Incas, were able to adapt to high-altitude living through successful acclimatization, characterized by an increased oxygen supply to the tissues. For natives living in the Andean highlands, this was achieved through the development of a larger lung capacity, and an increase in red blood cell counts, hemoglobin concentration, and capillary beds.
Compared to other humans, the Andeans had slower heart rates, almost one-third larger lung capacity, about 2 L (4 pints) more blood volume and double the amount of hemoglobin, which transfers oxygen from the lungs to the rest of the body. While the Conquistadors may have been taller, the Inca had the advantage of coping with the extraordinary altitude. Tibetans living in the Himalayas are also adapted to life at high altitudes, although their adaptation is different from that of the Andeans.
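The quipu record-keeping mentioned above used a base-10 place-value encoding, as established by Leland Locke's early-20th-century analysis: each pendant cord carries clusters of knots, with the cluster nearest the main cord holding the highest power of ten and an empty position standing for zero. A minimal illustrative sketch of that encoding follows; the function names and list representation are invented here for illustration, not drawn from any source.

```python
# Sketch of the decimal place-value scheme attributed to quipu cords:
# a cord is modeled as a list of digit values (knot counts per cluster),
# ordered from the top of the cord (most significant) downward.

def decode_cord(knot_clusters):
    """Read a pendant cord's knot clusters as a base-10 number.

    A position with no knots is recorded as 0.
    """
    value = 0
    for digit in knot_clusters:
        value = value * 10 + digit
    return value

def encode_cord(number):
    """Return the knot count for each position, most significant first."""
    if number == 0:
        return [0]
    digits = []
    while number > 0:
        digits.append(number % 10)
        number //= 10
    return digits[::-1]

# A cord with a 3-knot cluster, then a gap, then a 5-knot cluster reads 305.
print(decode_cord([3, 0, 5]))
```

On a physical quipu the units position was distinguished by knot type (long knots for 2–9, a figure-eight knot for 1), which is abstracted away in this sketch.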
[ { "paragraph_id": 0, "text": "The Inca Empire (also known as the Incan Empire and the Inka Empire), called Tawantinsuyu by its subjects (Quechua for the \"Realm of the Four Parts\"), was the largest empire in pre-Columbian America. The administrative, political, and military center of the empire was in the city of Cusco. The Inca civilization rose from the Peruvian highlands sometime in the early 13th century. The Spanish began the conquest of the Inca Empire in 1532 and by 1572, the last Inca state was fully conquered.", "title": "" }, { "paragraph_id": 1, "text": "From 1438 to 1533, the Incas incorporated a large portion of western South America, centered on the Andean Mountains, using conquest and peaceful assimilation, among other methods. At its largest, the empire joined modern-day Peru, what are now western Ecuador, western and south central Bolivia, northwest Argentina, the southwesternmost tip of Colombia and a large portion of modern-day Chile into a state comparable to the historical empires of Eurasia. Its official language was Quechua.", "title": "" }, { "paragraph_id": 2, "text": "The Inca Empire was unique in that it lacked many of the features associated with civilization in the Old World. Anthropologist Gordon McEwan wrote that the Incas were able to construct \"one of the greatest imperial states in human history\" without the use of the wheel, draft animals, knowledge of iron or steel, or even a system of writing. Notable features of the Inca Empire included its monumental architecture, especially stonework, extensive road network reaching all corners of the empire, finely-woven textiles, use of knotted strings (quipu) for record keeping and communication, agricultural innovations and production in a difficult environment, and the organization and management fostered or imposed on its people and their labor.", "title": "" }, { "paragraph_id": 3, "text": "The Inca Empire functioned largely without money and without markets. 
Instead, exchange of goods and services was based on reciprocity between individuals and among individuals, groups, and Inca rulers. \"Taxes\" consisted of a labour obligation of a person to the Empire. The Inca rulers (who theoretically owned all the means of production) reciprocated by granting access to land and goods and providing food and drink in celebratory feasts for their subjects.", "title": "" }, { "paragraph_id": 4, "text": "Many local forms of worship persisted in the empire, most of them concerning local sacred Huacas, but the Inca leadership encouraged the sun worship of Inti – their sun god – and imposed its sovereignty above other cults such as that of Pachamama. The Incas considered their king, the Sapa Inca, to be the \"son of the sun\".", "title": "" }, { "paragraph_id": 5, "text": "The Incan economy is a subject of scholarly debate. Darrell E. La Lone, in his work The Inca as a Nonmarket Economy, noted that scholars have described it as \"feudal, slave, [or] socialist,\" as well as \"a system based on reciprocity and redistribution; a system with markets and commerce; or an Asiatic mode of production.\"", "title": "" }, { "paragraph_id": 6, "text": "The Inca referred to their empire as Tawantinsuyu, \"the four suyu\". In Quechua, tawa is four and -ntin is a suffix naming a group, so that a tawantin is a quartet, a group of four things taken together, in this case the four suyu (\"regions\" or \"provinces\") whose corners met at the capital. The four suyu were: Chinchaysuyu (north), Antisuyu (east; the Amazon jungle), Qullasuyu (south) and Kuntisuyu (west). The name Tawantinsuyu was, therefore, a descriptive term indicating a union of provinces. 
The Spanish transliterated the name as Tahuatinsuyo or Tahuatinsuyu.", "title": "Etymology" }, { "paragraph_id": 7, "text": "While the term Inka is nowadays translated as \"ruler\" or \"lord\" in Quechua, this term does not simply refer to the \"King\" of the Tawantinsuyu or Sapa Inka but also to the Inca nobles, and some theorize its meaning could be broader. In that sense, the Inca nobles were a small percentage of the total population of the empire, probably numbering only 15,000 to 40,000, but ruling a population of around 10 million people.", "title": "Etymology" }, { "paragraph_id": 8, "text": "When the Spanish arrived in the Empire of the Incas, they gave the name \"Peru\" to what the natives knew as Tawantinsuyu. The name \"Inca Empire\" (Imperio de los Incas) originated from the chronicles of the 16th century.", "title": "Etymology" }, { "paragraph_id": 9, "text": "The Inca Empire was the last chapter of thousands of years of Andean civilizations. The Andean civilization is one of at least five civilizations in the world deemed by scholars to be \"pristine.\" The concept of a \"pristine\" civilization refers to a civilization that has developed independently of external influences and is not a derivative of other civilizations.", "title": "History" }, { "paragraph_id": 10, "text": "The Inca Empire was preceded by two large-scale empires in the Andes: the Tiwanaku (c. 300–1100 AD), based around Lake Titicaca, and the Wari or Huari (c. 600–1100 AD), centered near the city of Ayacucho. The Wari occupied the Cuzco area for about 400 years. Thus, many of the characteristics of the Inca Empire derived from earlier multi-ethnic and expansive Andean cultures.
To those earlier civilizations may be owed some of the accomplishments cited for the Inca Empire: \"thousands of miles of roads and dozens of large administrative centers with elaborate stone construction...terraced mountainsides and filled in valleys\", and the production of \"vast quantities of goods\".", "title": "History" }, { "paragraph_id": 11, "text": "Carl Troll has argued that the development of the Inca state in the central Andes was aided by conditions that allow for the elaboration of the staple food chuño. Chuño, which can be stored for long periods, is made of potato dried at the freezing temperatures that are common at nighttime in the southern Peruvian highlands. Such a link between the Inca state and chuño has been questioned, as other crops such as maize can also be dried with only sunlight.", "title": "History" }, { "paragraph_id": 12, "text": "Troll also argued that llamas, the Incas' pack animal, can be found in their largest numbers in this very same region. The maximum extent of the Inca Empire roughly coincided with the distribution of llamas and alpacas, the only large domesticated animals in Pre-Hispanic America.", "title": "History" }, { "paragraph_id": 13, "text": "As a third point Troll pointed out irrigation technology as advantageous to Inca state-building. While Troll theorized concerning environmental influences on the Inca Empire, he opposed environmental determinism, arguing that culture lay at the core of the Inca civilization.", "title": "History" }, { "paragraph_id": 14, "text": "The Inca people were a pastoral tribe in the Cusco area around the 12th century. Indigenous Peruvian oral history tells an origin story of three caves. The center cave at Tampu T'uqu (Tambo Tocco) was named Qhapaq T'uqu (\"principal niche\", also spelled Capac Tocco). The other caves were Maras T'uqu (Maras Tocco) and Sutiq T'uqu (Sutic Tocco). Four brothers and four sisters stepped out of the middle cave. 
They were: Ayar Manco, Ayar Cachi, Ayar Awqa (Ayar Auca) and Ayar Uchu; and Mama Ocllo, Mama Raua, Mama Huaco and Mama Qura (Mama Cora). Out of the side caves came the people who were to be the ancestors of all the Inca clans.", "title": "History" }, { "paragraph_id": 15, "text": "Ayar Manco carried a magic staff made of the finest gold. Where this staff landed, the people would live. They traveled for a long time. On the way, Ayar Cachi boasted about his strength and power. His siblings tricked him into returning to the cave to get a sacred llama. When he went into the cave, they trapped him inside to get rid of him.", "title": "History" }, { "paragraph_id": 16, "text": "Ayar Uchu decided to stay on the top of the cave to look over the Inca people. The minute he proclaimed that, he turned to stone. They built a shrine around the stone and it became a sacred object. Ayar Auca grew tired of all this and decided to travel alone. Only Ayar Manco and his four sisters remained.", "title": "History" }, { "paragraph_id": 17, "text": "Finally, they reached Cusco. The staff sank into the ground. Before they arrived, Mama Ocllo had already borne Ayar Manco a child, Sinchi Roca. The people who were already living in Cusco fought hard to keep their land, but Mama Huaca was a good fighter. When the enemy attacked, she threw her bolas (several stones tied together that spun through the air when thrown) at a soldier (gualla) and killed him instantly. The other people became afraid and ran away.", "title": "History" }, { "paragraph_id": 18, "text": "After that, Ayar Manco became known as Manco Cápac, the founder of the Inca. It is said that he and his sisters built the first Inca homes in the valley with their own hands. When the time came, Manco Cápac turned to stone like his brothers before him. 
His son, Sinchi Roca, became the second emperor of the Inca.", "title": "History" }, { "paragraph_id": 19, "text": "Under the leadership of Manco Cápac, the Inca formed the small city-state Kingdom of Cusco (Quechua Qusqu', Qosqo). In 1438, they began a far-reaching expansion under the command of Sapa Inca (paramount leader) Pachacuti-Cusi Yupanqui, whose name meant \"earth-shaker\". The name of Pachacuti was given to him after he conquered the Tribe of Chancas (modern Apurímac). During his reign, he and his son Tupac Yupanqui brought much of the modern-day territory of Peru under Inca control.", "title": "History" }, { "paragraph_id": 20, "text": "Pachacuti reorganized the kingdom of Cusco into the Tahuantinsuyu, which consisted of a central government with the Inca at its head and four provincial governments with strong leaders: Chinchasuyu (NW), Antisuyu (NE), Kuntisuyu (SW) and Qullasuyu (SE). Pachacuti is thought to have built Machu Picchu, either as a family home or summer retreat, although it may have been an agricultural station.", "title": "History" }, { "paragraph_id": 21, "text": "Pachacuti sent spies to regions he wanted in his empire and they brought to him reports on political organization, military strength and wealth. He then sent messages to their leaders extolling the benefits of joining his empire, offering them presents of luxury goods such as high quality textiles and promising that they would be materially richer as his subjects.", "title": "History" }, { "paragraph_id": 22, "text": "Most accepted the rule of the Inca as a fait accompli and acquiesced peacefully. Refusal to accept Inca rule resulted in military conquest. Following conquest the local rulers were executed. The ruler's children were brought to Cusco to learn about Inca administration systems, then return to rule their native lands. 
This allowed the Inca to indoctrinate them into the Inca nobility and, with luck, marry their daughters into families at various corners of the empire.", "title": "History" }, { "paragraph_id": 23, "text": "Pachacuti had named his favorite son, Amaru Yupanqui, as his co-ruler and successor. However, as co-ruler Amaru showed little interest in military affairs. Due to this lack of military talent, he faced much opposition from the Inca nobility, who began to plot against him. Despite this, Pachacuti decided to turn a blind eye to his son's shortcomings. Nevertheless, following a revolt during which Amaru almost led the Inca forces to defeat, the Sapa Inca decided to replace the co-ruler with another one of his sons, Túpac Inca Yupanqui. Túpac Inca Yupanqui began conquests to the north in 1463 and continued them as Inca ruler after Pachacuti's death in 1471. Túpac Inca's most important conquest was the Kingdom of Chimor, the Inca's only serious rival for the Peruvian coast. Túpac Inca's empire then stretched north into what are today Ecuador and Colombia.", "title": "History" }, { "paragraph_id": 24, "text": "Túpac Inca's son Huayna Cápac added a small portion of land to the north in what is today Ecuador. At its height, the Inca Empire included modern-day Peru, what are today western and south central Bolivia, southwest Ecuador and Colombia and a large portion of modern-day Chile, north of the Maule River. Traditional historiography claims the advance south halted after the Battle of the Maule, where they met determined resistance from the Mapuche.", "title": "History" }, { "paragraph_id": 25, "text": "This view is challenged by historian Osvaldo Silva, who argues instead that it was the social and political framework of the Mapuche that posed the main difficulty in imposing imperial rule.
Silva does accept that the battle of the Maule was a stalemate, but argues the Incas lacked the incentives for conquest that they had had when fighting more complex societies such as the Chimú Empire.", "title": "History" }, { "paragraph_id": 26, "text": "Silva also disputes the date given by traditional historiography for the battle: the late 15th century during the reign of Topa Inca Yupanqui (1471–93). Instead, he places it in 1532 during the Inca Civil War. Nevertheless, Silva agrees with the claim that the bulk of the Incan conquests were made during the late 15th century. At the time of the Incan Civil War an Inca army was, according to Diego de Rosales, subduing a revolt among the Diaguitas of Copiapó and Coquimbo.", "title": "History" }, { "paragraph_id": 27, "text": "The empire's push into the Amazon Basin near the Chinchipe River was stopped by the Shuar in 1527. The empire extended into corners of what are today northern Argentina and part of southern Colombia. However, most of the southern portion of the Inca empire, the portion denominated as Qullasuyu, was located in the Altiplano.", "title": "History" }, { "paragraph_id": 28, "text": "The Inca Empire was an amalgamation of languages, cultures and peoples. The components of the empire were not all uniformly loyal, nor were the local cultures all fully integrated. The Inca empire as a whole had an economy based on exchange and taxation of luxury goods and labour. The following quote describes a method of taxation:", "title": "History" }, { "paragraph_id": 29, "text": "For as is well known to all, not a single village of the highlands or the plains failed to pay the tribute levied on it by those who were in charge of these matters.
There were even provinces where, when the natives alleged that they were unable to pay their tribute, the Inca ordered that each inhabitant should be obliged to turn in every four months a large quill full of live lice, which was the Inca's way of teaching and accustoming them to pay tribute.", "title": "History" }, { "paragraph_id": 30, "text": "Spanish conquistadors led by Francisco Pizarro and his brothers explored south from what is today Panama, reaching Inca territory by 1526. It was clear that they had reached a wealthy land with prospects of great treasure, and after another expedition in 1529 Pizarro traveled to Spain and received royal approval to conquer the region and be its viceroy. This approval was received as detailed in the following quote: \"In July 1529 the Queen of Spain signed a charter allowing Pizarro to conquer the Incas. Pizarro was named governor and captain of all conquests in Peru, or New Castile, as the Spanish now called the land\".", "title": "History" }, { "paragraph_id": 31, "text": "When the conquistadors returned to Peru in 1532, a war of succession between the sons of Sapa Inca Huayna Capac, Huáscar and Atahualpa, and unrest among newly conquered territories weakened the empire. Perhaps more importantly, smallpox, influenza, typhus and measles had spread from Central America. The first epidemic of European disease in the Inca Empire was probably in the 1520s, killing Huayna Capac, his designated heir, and an unknown, probably large, number of other Incan subjects.", "title": "History" }, { "paragraph_id": 32, "text": "The forces led by Pizarro consisted of 168 men, along with one cannon and 27 horses. The conquistadors were armed with lances, arquebuses, steel armor and long swords. In contrast, the Inca used weapons made out of wood, stone, copper and bronze, while using an Alpaca fiber based armor, putting them at significant technological disadvantage—none of their weapons could pierce the Spanish steel armor. 
In addition, due to the absence of horses in Peru, the Inca did not develop tactics to fight cavalry. However, the Inca were still effective warriors, being able to successfully fight the Mapuche, who later would strategically defeat the Spanish as they expanded further south.", "title": "History" }, { "paragraph_id": 33, "text": "The first engagement between the Inca and the Spanish was the Battle of Puná, near present-day Guayaquil, Ecuador, on the Pacific Coast; Pizarro then founded the city of Piura in July 1532. Hernando de Soto was sent inland to explore the interior and returned with an invitation to meet the Inca, Atahualpa, who had defeated his brother in the civil war and was resting at Cajamarca with his army of 80,000 troops, who at the time were armed only with hunting tools (knives and lassos for hunting llamas).", "title": "History" }, { "paragraph_id": 34, "text": "Pizarro and some of his men, most notably a friar named Vicente de Valverde, met with the Inca, who had brought only a small retinue. The Inca offered them ceremonial chicha in a golden cup, which the Spanish rejected. The Spanish interpreter, Friar Vicente, read the \"Requerimiento\" that demanded that he and his empire accept the rule of King Charles I of Spain and convert to Christianity. Atahualpa dismissed the message and asked them to leave. After this, the Spanish began their attack against the mostly unarmed Inca, captured Atahualpa as hostage, and forced the Inca to collaborate.", "title": "History" }, { "paragraph_id": 35, "text": "Atahualpa offered the Spaniards enough gold to fill the room he was imprisoned in and twice that amount of silver. The Inca fulfilled this ransom, but Pizarro deceived them, refusing to release the Inca afterwards. During Atahualpa's imprisonment, Huáscar was assassinated elsewhere.
The Spaniards maintained that this was at Atahualpa's orders; this was used as one of the charges against Atahualpa when the Spaniards finally executed him, in August 1533.", "title": "History" }, { "paragraph_id": 36, "text": "Although \"defeat\" often implies an unwanted loss in battle, many of the diverse ethnic groups ruled by the Inca \"welcomed the Spanish invaders as liberators and willingly settled down with them to share rule of Andean farmers and miners\". Many regional leaders, called Kurakas, continued to serve the Spanish overlords, called encomenderos, as they had served the Inca overlords. Other than efforts to spread the religion of Christianity, the Spanish benefited from and made little effort to change the society and culture of the former Inca Empire until the rule of Francisco de Toledo as viceroy from 1569 to 1581.", "title": "History" }, { "paragraph_id": 37, "text": "The Spanish installed Atahualpa's brother Manco Inca Yupanqui in power; for some time Manco cooperated with the Spanish while they fought to put down resistance in the north. Meanwhile, an associate of Pizarro, Diego de Almagro, attempted to claim Cusco. Manco tried to use this intra-Spanish feud to his advantage, recapturing Cusco in 1536, but the Spanish retook the city afterwards. Manco Inca then retreated to the mountains of Vilcabamba and established the small Neo-Inca State, where he and his successors ruled for another 36 years, sometimes raiding the Spanish or inciting revolts against them. In 1572 the last Inca stronghold was conquered and the last ruler, Túpac Amaru, Manco's son, was captured and executed. This ended resistance to the Spanish conquest under the political authority of the Inca state.", "title": "History" }, { "paragraph_id": 38, "text": "After the fall of the Inca Empire many aspects of Inca culture were systematically destroyed, including their sophisticated farming system, known as the vertical archipelago model of agriculture. 
Spanish colonial officials used the Inca mita corvée labor system for colonial aims, sometimes brutally. One member of each family was forced to work in the gold and silver mines, the foremost of which was the titanic silver mine at Potosí. When a family member died, which would usually happen within a year or two, the family was required to send a replacement.", "title": "History" }, { "paragraph_id": 39, "text": "Although smallpox is usually presumed to have spread through the Empire before the arrival of the Spaniards, the devastation is also consistent with other theories. Beginning in Colombia, smallpox spread rapidly before the Spanish invaders first arrived in the empire. The spread was probably aided by the efficient Inca road system. Smallpox was only the first epidemic. Other diseases, including a probable typhus outbreak in 1546, influenza and smallpox together in 1558, smallpox again in 1589, diphtheria in 1614, and measles in 1618, all ravaged the Inca people.", "title": "History" }, { "paragraph_id": 40, "text": "There would be periodic attempts by indigenous leaders to expel the Spanish colonists and re-create the Inca Empire until the late 18th century. See Juan Santos Atahualpa and Túpac Amaru II.", "title": "History" }, { "paragraph_id": 41, "text": "The number of people inhabiting Tawantinsuyu at its peak is uncertain, with estimates ranging from 4–37 million. Most population estimates are in the range of 6 to 14 million. In spite of the fact that the Inca kept excellent census records using their quipus, knowledge of how to read them was lost as almost all fell into disuse and disintegrated over time or were destroyed by the Spaniards.", "title": "Society" }, { "paragraph_id": 42, "text": "The empire was linguistically diverse. 
Some of the most important languages were Quechua, Aymara, Puquina and Mochica, respectively mainly spoken in the Central Andes, the Altiplano (Qullasuyu), the south Peruvian coast (Kuntisuyu), and the area of the north Peruvian coast (Chinchaysuyu) around Chan Chan, today Trujillo. Other languages included Quignam, Jaqaru, Leco, Uru-Chipaya languages, Kunza, Humahuaca, Cacán, Mapudungun, Culle, Chachapoya, Catacao languages, Manta, Barbacoan languages, and Cañari–Puruhá as well as numerous Amazonian languages in the frontier regions. The exact linguistic topography of the pre-Columbian and early colonial Andes remains incompletely understood, owing to the extinction of several languages and the loss of historical records.", "title": "Society" }, { "paragraph_id": 43, "text": "In order to manage this diversity, the Inca lords promoted the usage of Quechua, especially the variety spoken in what is now Lima, as the Qhapaq Runasimi (\"great language of the people\"), the official language and lingua franca. Defined by mutual intelligibility, Quechua is actually a family of languages rather than one single language, parallel to the Romance or Slavic languages in Europe. Most communities within the empire, even those resistant to Inca rule, learned to speak a variety of Quechua (forming new regional varieties with distinct phonetics) in order to communicate with the Inca lords and mitma colonists, as well as the wider integrating society, but largely retained their native languages as well. The Incas also had their own ethnic language, referred to as Qhapaq simi (\"royal language\"), which is thought to have been closely related to or a dialect of Puquina.", "title": "Society" }, { "paragraph_id": 44, "text": "There are several common misconceptions about the history of Quechua, as it is frequently identified as the \"Inca language\".
Quechua did not originate with the Incas, had been a lingua franca in multiple areas before the Inca expansions, was diverse before the rise of the Incas, and was not the native or original language of the Incas. However, the Incas left a linguistic legacy, in that they introduced Quechua to many areas where it is still widely spoken today, including Ecuador, southern Bolivia, southern Colombia, and parts of the Amazon basin. The Spanish conquerors continued the official usage of Quechua during the early colonial period, and transformed it into a literary language.", "title": "Society" }, { "paragraph_id": 45, "text": "The Incas were not known to develop a written form of language; however, they visually recorded narratives through paintings on vases and cups (qirus). These paintings are usually accompanied by geometric patterns known as toqapu, which are also found in textiles. Researchers have speculated that toqapu patterns could have served as a form of written communication (e.g. heraldry or glyphs); however, this remains unclear. The Incas also kept records by using quipus.", "title": "Society" }, { "paragraph_id": 46, "text": "Because of the high infant mortality rates that plagued the Inca Empire, all newborn infants were given the term 'wawa' at birth. Most families did not invest very much in their child until the child reached the age of two or three years old. Once the child reached the age of three, a \"coming of age\" ceremony occurred, called the rutuchikuy. For the Incas, this ceremony indicated that the child had entered the stage of \"ignorance\". During this ceremony, the family would invite all relatives to their house for food and dance, and then each member of the family would receive a lock of hair from the child. After each family member had received a lock, the father would shave the child's head.
This stage of life was categorized by a stage of \"ignorance, inexperience, and lack of reason, a condition that the child would overcome with time\". For Incan society, in order to advance from the stage of ignorance to development the child must learn the roles associated with their gender.", "title": "Society" }, { "paragraph_id": 47, "text": "The next important ritual was to celebrate the maturity of a child. Unlike the coming of age ceremony, the celebration of maturity signified the child's sexual potency. This celebration of puberty was called warachikuy for boys and qikuchikuy for girls. The warachikuy ceremony included dancing, fasting, tasks to display strength, and family ceremonies. The boy would also be given new clothes and taught how to act as an unmarried man. The qikuchikuy signified the onset of menstruation, upon which the girl would go into the forest alone and return only once the bleeding had ended. In the forest she would fast, and, once returned, the girl would be given a new name, adult clothing, and advice. This \"folly\" stage of life was the time young adults were allowed to have sex without being a parent.", "title": "Society" }, { "paragraph_id": 48, "text": "Between the ages of 20 and 30, people were considered young adults, \"ripe for serious thought and labor\". Young adults were able to retain their youthful status by living at home and assisting in their home community. Young adults only reached full maturity and independence once they had married.", "title": "Society" }, { "paragraph_id": 49, "text": "At the end of life, the terms for men and women denote loss of sexual vitality and humanity. 
Specifically, the \"decrepitude\" stage signifies the loss of mental well-being and further physical decline.", "title": "Society" }, { "paragraph_id": 50, "text": "In the Incan Empire, the age of marriage differed for men and women: men typically married at the age of 20, while women usually got married about four years earlier at the age of 16. Men who were highly ranked in society could have multiple wives, but those lower in the ranks could only take a single wife. Marriages were typically within classes and resembled a more business-like agreement. Once married, the women were expected to cook, collect food and watch over the children and livestock. Girls and mothers would also work around the house to keep it orderly to please the public inspectors. These duties remained the same even after wives became pregnant, with the added responsibility of praying and making offerings to Kanopa, who was the god of pregnancy. It was typical for marriages to begin on a trial basis with both men and women having a say in the longevity of the marriage. If the man felt that it would not work out, or if the woman wanted to return to her parents' home, the marriage would end. Once the marriage was final, the only way the two could be divorced was if they did not have a child together. Marriage within the Empire was crucial for survival. A family was considered disadvantaged if there was not a married couple at the center because everyday life centered around the balance of male and female tasks.", "title": "Society" }, { "paragraph_id": 51, "text": "According to some historians, such as Terence N. D'Altroy, male and female roles were considered equal in Inca society. The \"indigenous cultures saw the two genders as complementary parts of a whole\". In other words, there was not a hierarchical structure in the domestic sphere for the Incas. 
Within the domestic sphere, women came to be known as weavers, although there is significant evidence to suggest that this gender role did not appear until colonizing Spaniards realized women's productive talents in this sphere and used it to their economic advantage. There is evidence to suggest that both men and women contributed equally to the weaving tasks in pre-Hispanic Andean culture. Women's everyday tasks included spinning, watching the children, weaving cloth, cooking, brewing chicha, preparing fields for cultivation, planting seeds, bearing children, harvesting, weeding, hoeing, herding, and carrying water. Men, on the other hand, \"weeded, plowed, participated in combat, helped in the harvest, carried firewood, built houses, herded llama and alpaca, and spun and wove when necessary\". This relationship between the genders may have been complementary. Unsurprisingly, onlooking Spaniards believed women were treated like slaves, because women did not work in Spanish society to the same extent, and certainly did not work in fields. Women were sometimes allowed to own land and herds because inheritance was passed down from both the mother's and father's side of the family. Kinship within the Inca society followed a parallel line of descent. In other words, women descended from women and men descended from men. Due to the parallel descent, a woman had access to land and other assets through her mother.", "title": "Society" }, { "paragraph_id": 52, "text": "Due to the dry climate that extends from modern-day Peru to what is now Chile's Norte Grande, mummification occurred naturally by desiccation. It is believed that the ancient Incas learned to mummify their dead to show reverence to their leaders and representatives. Mummification was chosen to preserve the body and to give others the opportunity to worship them in their death. The ancient Inca believed in reincarnation, so preservation of the body was vital for passage into the afterlife. 
Since mummification was reserved for royalty, this entailed preserving power by placing the deceased's valuables with the body in places of honor. The bodies remained accessible for ceremonies where they would be removed and celebrated with. The ancient Inca mummified their dead with various tools. Chicha corn beer was used to delay decomposition and the effects of bacterial activity on the body. The bodies were then stuffed with natural materials such as vegetable matter and animal hair. Sticks were used to maintain their shape and poses. In addition to the mummification process, the Inca would bury their dead in the fetal position inside a vessel intended to mimic the womb for preparation of their new birth. A ceremony would be held that included music, food, and drink for the relatives and loved ones of the deceased.", "title": "Society" }, { "paragraph_id": 53, "text": "Inca myths were transmitted orally until early Spanish colonists recorded them; however, some scholars claim that they were recorded on quipus, Andean knotted string records.", "title": "Religion" }, { "paragraph_id": 54, "text": "The Inca believed in reincarnation. After death, the passage to the next world was fraught with difficulties. The spirit of the dead, camaquen, would need to follow a long road and during the trip the assistance of a black dog that could see in the dark was required. Most Incas imagined the after world to be like an earthly paradise with flower-covered fields and snow-capped mountains.", "title": "Religion" }, { "paragraph_id": 55, "text": "It was important to the Inca that they not die as a result of burning or that the body of the deceased not be incinerated. Burning would cause their vital force to disappear and threaten their passage to the after world. The Inca nobility practiced cranial deformation. 
They wrapped tight cloth straps around the heads of newborns to shape their soft skulls into a more conical form, thus distinguishing the nobility from other social classes.", "title": "Religion" }, { "paragraph_id": 56, "text": "The Incas made human sacrifices. As many as 4,000 servants, court officials, favorites and concubines were killed upon the death of the Inca Huayna Capac in 1527. The Incas performed child sacrifices around important events, such as the death of the Sapa Inca or during a famine. These sacrifices were known as qhapaq hucha.", "title": "Religion" }, { "paragraph_id": 57, "text": "The Incas were polytheists who worshipped many gods. These included:", "title": "Religion" }, { "paragraph_id": 58, "text": "The Inca Empire employed central planning. The Inca Empire traded with outside regions, although they did not operate a substantial internal market economy. While axe-monies were used along the northern coast, presumably by the provincial mindaláe trading class, most households in the empire lived in a traditional economy in which households were required to pay taxes, usually in the form of the mit'a corvée labor, and military obligations, though barter (or trueque) was present in some areas. In return, the state provided security, food in times of hardship through the supply of emergency resources, agricultural projects (e.g. aqueducts and terraces) to increase productivity, and occasional feasts hosted by Inca officials for their subjects. While mit'a was used by the state to obtain labor, individual villages had a pre-inca system of communal work, known as mink'a. This system survives to the modern day, known as mink'a or faena. 
The economy rested on the material foundations of the vertical archipelago, a system of ecological complementarity in accessing resources and the cultural foundation of ayni, or reciprocal exchange.", "title": "Economy" }, { "paragraph_id": 59, "text": "The Sapa Inca was conceptualized as divine and was effectively head of the state religion. The Willaq Umu (or Chief Priest) was second to the emperor. Local religious traditions continued and, in some cases such as the Oracle at Pachacamac on the Peruvian coast, were officially venerated. Following Pachacuti, the Sapa Inca claimed descent from Inti, who placed a high value on imperial blood; by the end of the empire, it was common to incestuously wed brother and sister. He was \"son of the sun\", and his people the intip churin, or \"children of the sun\", and both his right to rule and mission to conquer derived from his holy ancestor. The Sapa Inca also presided over ideologically important festivals, notably during the Inti Raymi, or \"Sunfest\" attended by soldiers, mummified rulers, nobles, clerics and the general population of Cusco beginning on the June solstice and culminating nine days later with the ritual breaking of the earth using a foot plow by the Inca. Moreover, Cusco was considered cosmologically central, loaded as it was with huacas and radiating ceque lines as the geographic center of the Four-Quarters; Inca Garcilaso de la Vega called it \"the navel of the universe\".", "title": "Government" }, { "paragraph_id": 60, "text": "The Inca Empire had a decentralized structure, consisting of a central government with the Inca at its head and four regional quarters, or suyu: Chinchay Suyu (NW), Anti Suyu (NE), Kunti Suyu (SW) and Qulla Suyu (SE). The four corners of these quarters met at the center, Cusco. These suyu were likely created around 1460 during the reign of Pachacuti before the empire reached its largest territorial extent. 
At the time the suyu were established they were roughly of equal size and only later changed their proportions as the empire expanded north and south along the Andes.", "title": "Government" }, { "paragraph_id": 61, "text": "Cusco was likely not organized as a wamani, or province. Rather, it was probably somewhat akin to a modern federal district, like Washington, DC or Mexico City. The city sat at the center of the four suyu and served as the preeminent center of politics and religion. While Cusco was essentially governed by the Sapa Inca, his relatives and the royal panaqa lineages, each suyu was governed by an Apu, a term of esteem used for men of high status and for venerated mountains. Both Cusco as a district and the four suyu as administrative regions were grouped into upper hanan and lower hurin divisions. As the Inca did not have written records, it is impossible to exhaustively list the constituent wamani. However, colonial records allow us to reconstruct a partial list. There were likely more than 86 wamani, with more than 48 in the highlands and more than 38 on the coast.", "title": "Government" }, { "paragraph_id": 62, "text": "The most populous suyu was Chinchaysuyu, which encompassed the former Chimu empire and much of the northern Andes. At its largest extent, it extended through much of what are now Ecuador and Colombia.", "title": "Government" }, { "paragraph_id": 63, "text": "The largest suyu by area was Qullasuyu, named after the Aymara-speaking Qulla people. It encompassed what is now the Bolivian Altiplano and much of the southern Andes, reaching what is now Argentina and as far south as the Maipo or Maule river in modern Central Chile. Historian José Bengoa singled out Quillota as likely being the foremost Inca settlement in Chile.", "title": "Government" }, { "paragraph_id": 64, "text": "The second smallest suyu, Antisuyu, was northwest of Cusco in the high Andes. 
Its name is the root of the word \"Andes\".", "title": "Government" }, { "paragraph_id": 65, "text": "Kuntisuyu was the smallest suyu, located along the southern coast of modern Peru, extending into the highlands towards Cusco.", "title": "Government" }, { "paragraph_id": 66, "text": "The Inca state had no separate judiciary or codified laws. Customs, expectations and traditional local power holders governed behavior. The state had legal force, such as through tokoyrikoq (lit. \"he who sees all\"), or inspectors. The highest such inspector, typically a blood relative to the Sapa Inca, acted independently of the conventional hierarchy, providing a point of view for the Sapa Inca free of bureaucratic influence.", "title": "Government" }, { "paragraph_id": 67, "text": "The Inca had three moral precepts that governed their behavior:", "title": "Government" }, { "paragraph_id": 68, "text": "Colonial sources are not entirely clear or in agreement about Inca government structure, such as exact duties and functions of government positions. But the basic structure can be broadly described. The top was the Sapa Inca. Below that may have been the Willaq Umu, literally the \"priest who recounts\", the High Priest of the Sun. However, beneath the Sapa Inca also sat the Inkap rantin, who was a confidant and assistant to the Sapa Inca, perhaps similar to a Prime Minister. Starting with Topa Inca Yupanqui, a \"Council of the Realm\" was composed of 16 nobles: 2 from hanan Cusco; 2 from hurin Cusco; 4 from Chinchaysuyu; 2 from Cuntisuyu; 4 from Collasuyu; and 2 from Antisuyu. This weighting of representation balanced the hanan and hurin divisions of the empire, both within Cusco and within the Quarters (hanan suyukuna and hurin suyukuna).", "title": "Government" }, { "paragraph_id": 69, "text": "While provincial bureaucracy and government varied greatly, the basic organization was decimal. 
Taxpayers – male heads of household of a certain age range – were organized into corvée labor units (often doubling as military units) that formed the state's muscle as part of mit'a service. Each unit of more than 100 taxpayers was headed by a kuraka, while smaller units were headed by a kamayuq, a lower, non-hereditary status. However, while kuraka status was hereditary and kurakas typically served for life, the position of a kuraka in the hierarchy was subject to change based on the privileges of superiors in the hierarchy; a pachaka kuraka could be appointed to the position by a waranqa kuraka. Furthermore, one kuraka in each decimal level could serve as the head of one of the nine groups at a lower level, so that a pachaka kuraka might also be a waranqa kuraka, in effect directly responsible for one unit of 100 taxpayers and less directly responsible for nine other such units.", "title": "Government" }, { "paragraph_id": 70, "text": "We can assure your majesty that it is so beautiful and has such fine buildings that it would even be remarkable in Spain.", "title": "Arts and technology" }, { "paragraph_id": 71, "text": "Francisco Pizarro", "title": "Arts and technology" }, { "paragraph_id": 72, "text": "Architecture was the most important of the Incan arts, with textiles reflecting architectural motifs. The most notable example is Machu Picchu, which was constructed by Inca engineers. The prime Inca structures were made of stone blocks that fit together so well that a knife could not be fitted through the stonework. These constructs have survived for centuries, with no use of mortar to sustain them.", "title": "Arts and technology" }, { "paragraph_id": 73, "text": "This process was first used on a large scale by the Pucara (c. 300 BC–AD 300) peoples to the south in Lake Titicaca and later in the city of Tiwanaku (c. AD 400–1100) in what is now Bolivia. 
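The decimal nesting of taxpayer units described above can be sketched in a few lines. This is a minimal sketch using only the unit sizes and titles the text gives (units of 100 under a pachaka kuraka, units of 1,000 under a waranqa kuraka); the code layout itself is purely illustrative.

```python
# Sketch of the Inca decimal taxpayer organization described above.
# Unit sizes and kuraka titles are from the text; everything else here
# is an illustrative assumption.
UNIT_SIZES = {"pachaka": 100, "waranqa": 1_000}

def units_needed(taxpayers: int, level: str) -> int:
    """Number of units of a given level needed for a taxpayer population."""
    size = UNIT_SIZES[level]
    return -(-taxpayers // size)  # ceiling division

# A waranqa kuraka overseeing 1,000 taxpayers heads ten pachaka units,
# one of which he might lead directly, as the text notes.
print(units_needed(1_000, "pachaka"))  # 10
print(units_needed(1_000, "waranqa"))  # 1
```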
The rocks were sculpted to fit together exactly by repeatedly lowering a rock onto another and carving away any sections on the lower rock where the dust was compressed. The tight fit and the concavity on the lower rocks made them extraordinarily stable, despite the ongoing challenge of earthquakes and volcanic activity.", "title": "Arts and technology" }, { "paragraph_id": 74, "text": "Physical measures used by the Inca were based on human body parts. Units included fingers, the distance from thumb to forefinger, palms, cubits and wingspans. The most basic distance unit was thatkiy or thatki, or one pace. The next largest unit was reported by Cobo to be the topo or tupu, measuring 6,000 thatkiys, or about 7.7 km (4.8 mi); careful study has shown that a range of 4.0 to 6.3 km (2.5 to 3.9 mi) is likely. Next was the wamani, composed of 30 topos (roughly 232 km or 144 mi). To measure area, 25 by 50 wingspans were used, reckoned in topos (roughly 3,280 sq km or 1,270 sq mi). It seems likely that distance was often interpreted as one day's walk; the distance between tambo way-stations varies widely in length, but far less in the time needed to walk it.", "title": "Arts and technology" }, { "paragraph_id": 75, "text": "Inca calendars were strongly tied to astronomy. Inca astronomers understood equinoxes, solstices and zenith passages, along with the Venus cycle. They could not, however, predict eclipses. The Inca calendar was essentially lunisolar, as two calendars were maintained in parallel, one solar and one lunar. As 12 lunar months fall 11 days short of a full 365-day solar year, those in charge of the calendar had to adjust it every winter solstice. Each lunar month was marked with festivals and rituals. Apparently, the days of the week were not named and days were not grouped into weeks. Similarly, months were not grouped into seasons. 
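The distance units above reduce to simple multiplication. A hedged sketch: the 6,000-pace topo and 30-topo wamani are from the text, but the pace length used here (about 1.283 m) is back-computed from Cobo's 7.7 km figure and is an assumption, not a historical measurement.

```python
# Rough converter for the Inca distance units described above.
# THATKIY_PER_TOPO and TOPO_PER_WAMANI come from the text; the default
# pace length is an assumption derived from Cobo's 7.7 km topo.
THATKIY_PER_TOPO = 6_000
TOPO_PER_WAMANI = 30

def topo_to_km(topos: float, pace_m: float = 1.283) -> float:
    """Convert topos to kilometres for an assumed pace length in metres."""
    return topos * THATKIY_PER_TOPO * pace_m / 1_000

print(round(topo_to_km(1), 1))             # 7.7, matching Cobo's topo
print(round(topo_to_km(TOPO_PER_WAMANI)))  # 231, near the quoted wamani length
```

Plugging in a pace at the low end of the studied 4.0–6.3 km topo range (about 0.67 m) instead shows why the wamani estimate varies so widely.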
Time during a day was not measured in hours or minutes, but in terms of how far the sun had travelled or in how long it had taken to perform a task.", "title": "Arts and technology" }, { "paragraph_id": 76, "text": "The sophistication of Inca administration, calendrics and engineering required facility with numbers. Numerical information was stored in the knots of quipu strings, allowing for compact storage of large numbers. These numbers were stored in base-10 digits, the same base used by the Quechua language and in administrative and military units. These numbers, stored in quipu, could be calculated on yupanas, grids with squares of positionally varying mathematical values, perhaps functioning as an abacus. Calculation was facilitated by moving piles of tokens, seeds or pebbles between compartments of the yupana. It is likely that Inca mathematics at least allowed division of integers into integers or fractions and multiplication of integers and fractions.", "title": "Arts and technology" }, { "paragraph_id": 77, "text": "According to mid-17th-century Jesuit chronicler Bernabé Cobo, the Inca designated officials to perform accounting-related tasks. These officials were called quipo camayos. Study of khipu sample VA 42527 (Museum für Völkerkunde, Berlin) revealed that the numbers arranged in calendrically significant patterns were used for agricultural purposes in the \"farm account books\" kept by the khipukamayuq (accountant or warehouse keeper) to facilitate the closing of accounting books.", "title": "Arts and technology" }, { "paragraph_id": 78, "text": "Tunics were created by skilled Incan textile-makers as a piece of warm clothing, but they also symbolized cultural and political status and power. Cumbi was the fine, tapestry-woven woolen cloth that was produced and necessary for the creation of tunics. Cumbi was produced by specially-appointed women and men. Generally, textile-making was practiced by both men and women. 
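The positional, base-10 quipu storage described above can be sketched as digit lists, with readback accumulating position by position much as tokens would be tallied on a yupana. This encoding is a simplification for illustration, not a decipherment of actual quipu conventions.

```python
# Sketch of base-10, positional number storage as described above for
# quipu cords: each knot cluster records one decimal digit.
def to_knots(n: int) -> list:
    """Decimal digits of n, most significant first - one cluster each."""
    return [int(d) for d in str(n)]

def from_knots(knots: list) -> int:
    """Read the value back, accumulating position by position."""
    value = 0
    for digit in knots:
        value = value * 10 + digit
    return value

print(to_knots(1532))            # [1, 5, 3, 2]
print(from_knots([1, 5, 3, 2]))  # 1532
```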
As emphasized by certain historians, only with European conquest was it deemed that women would become the primary weavers in society, as opposed to Incan society where specialty textiles were produced by men and women equally.", "title": "Arts and technology" }, { "paragraph_id": 79, "text": "Complex patterns and designs were meant to convey information about order in Andean society as well as the Universe. Tunics could also symbolize one's relationship to ancient rulers or important ancestors. These textiles were frequently designed to represent the physical order of a society, for example, the flow of tribute within an empire. Many tunics have a \"checkerboard effect\", which is known as the collcapata. According to historians Kenneth Mills, William B. Taylor, and Sandra Lauderdale Graham, the collcapata patterns \"seem to have expressed concepts of commonality, and, ultimately, unity of all ranks of people, representing a careful kind of foundation upon which the structure of Inkaic universalism was built.\" Rulers wore various tunics throughout the year, switching them out for different occasions and feasts.", "title": "Arts and technology" }, { "paragraph_id": 80, "text": "The symbols present within the tunics suggest the importance of \"pictographic expression\" within Inkan and other Andean societies far before the iconographies of the Spanish Christians.", "title": "Arts and technology" }, { "paragraph_id": 81, "text": "Uncu was a men's garment similar to a tunic. It was a knee-length upper-body garment; royals wore it with a mantle cloth called yacolla.", "title": "Arts and technology" }, { "paragraph_id": 82, "text": "Ceramics were painted using the polychrome technique portraying numerous motifs including animals, birds, waves, felines (popular in the Chavin culture) and geometric patterns found in the Nazca style of ceramics. 
In a culture without a written language, ceramics portrayed the basic scenes of everyday life, including the smelting of metals, relationships and scenes of tribal warfare. The most distinctive Inca ceramic objects are the Cusco bottles or \"aryballos\". Many of these pieces are on display in Lima in the Larco Archaeological Museum and the National Museum of Archaeology, Anthropology and History.", "title": "Arts and technology" }, { "paragraph_id": 83, "text": "Almost all of the gold and silver work of the Incan empire was melted down by the conquistadors, and shipped back to Spain.", "title": "Arts and technology" }, { "paragraph_id": 84, "text": "The Inca recorded information on assemblages of knotted strings, known as Quipu, although they can no longer be decoded. Originally it was thought that Quipu were used only as mnemonic devices or to record numerical data. Quipus are also believed to record history and literature.", "title": "Arts and technology" }, { "paragraph_id": 85, "text": "The Inca made many discoveries in medicine. They performed successful skull surgery, by cutting holes in the skull to alleviate fluid buildup and inflammation caused by head wounds. Many skull surgeries performed by Inca surgeons were successful. Survival rates were 80–90%, compared to about 30% before Inca times.", "title": "Arts and technology" }, { "paragraph_id": 86, "text": "The Incas revered the coca plant as sacred/magical. Its leaves were used in moderate amounts to lessen hunger and pain during work, but were mostly used for religious and health purposes. The Spaniards took advantage of the effects of chewing coca leaves. The Chasqui, messengers who ran throughout the empire to deliver messages, chewed coca leaves for extra energy. 
Coca leaves were also used as an anaesthetic during surgeries.", "title": "Arts and technology" }, { "paragraph_id": 87, "text": "The Inca army was the most powerful at that time, because any ordinary villager or farmer could be recruited as a soldier as part of the mit'a system of mandatory public service. Every able-bodied male Inca of fighting age had to take part in war in some capacity at least once and to prepare for warfare again when needed. By the time the empire reached its largest size, every section of the empire contributed to setting up an army for war.", "title": "Arts and technology" }, { "paragraph_id": 88, "text": "The Incas had no iron or steel and their weapons were not much more effective than those of their opponents, so they often defeated opponents by sheer force of numbers, or else by persuading them to surrender beforehand by offering generous terms. Inca weaponry included \"hardwood spears launched using throwers, arrows, javelins, slings, the bolas, clubs, and maces with star-shaped heads made of copper or bronze\". Rolling rocks downhill onto the enemy was a common strategy, taking advantage of the hilly terrain. Fighting was sometimes accompanied by drums and trumpets made of wood, shell or bone. Armor included:", "title": "Arts and technology" }, { "paragraph_id": 89, "text": "Roads allowed quick movement (on foot) for the Inca army and shelters called tambo and storage silos called qullqas were built one day's travelling distance from each other, so that an army on campaign could always be fed and rested. This can be seen in names of ruins such as Ollantay Tambo, or My Lord's Storehouse. These were set up so the Inca and his entourage would always have supplies (and possibly shelter) ready as they traveled.", "title": "Arts and technology" }, { "paragraph_id": 90, "text": "Chronicles and references from the 16th and 17th centuries support the idea of a banner. 
However, it represented the Inca (emperor), not the empire.", "title": "Arts and technology" }, { "paragraph_id": 91, "text": "Francisco López de Jerez wrote in 1534:", "title": "Arts and technology" }, { "paragraph_id": 92, "text": "... todos venían repartidos en sus escuadras con sus banderas y capitanes que los mandan, con tanto concierto como turcos.(... all of them came distributed into squads, with their flags and captains commanding them, as well-ordered as Turks.)", "title": "Arts and technology" }, { "paragraph_id": 93, "text": "Chronicler Bernabé Cobo wrote:", "title": "Arts and technology" }, { "paragraph_id": 94, "text": "The royal standard or banner was a small square flag, ten or twelve spans around, made of cotton or wool cloth, placed on the end of a long staff, stretched and stiff such that it did not wave in the air and on it each king painted his arms and emblems, for each one chose different ones, though the sign of the Incas was the rainbow and two parallel snakes along the width with the tassel as a crown, which each king used to add for a badge or blazon those preferred, like a lion, an eagle and other figures. (... el guión o estandarte real era una banderilla cuadrada y pequeña, de diez o doce palmos de ruedo, hecha de lienzo de algodón o de lana, iba puesta en el remate de una asta larga, tendida y tiesa, sin que ondease al aire, y en ella pintaba cada rey sus armas y divisas, porque cada uno las escogía diferentes, aunque las generales de los Incas eran el arco celeste y dos culebras tendidas a lo largo paralelas con la borda que le servía de corona, a las cuales solía añadir por divisa y blasón cada rey las que le parecía, como un león, un águila y otras figuras.)-Bernabé Cobo, Historia del Nuevo Mundo (1653)", "title": "Arts and technology" }, { "paragraph_id": 95, "text": "Guaman Poma's 1615 book, El primer nueva corónica y buen gobierno, shows numerous line drawings of Inca flags. 
In his 1847 book A History of the Conquest of Peru, \"William H. Prescott ... says that in the Inca army each company had its particular banner and that the imperial standard, high above all, displayed the glittering device of the rainbow, the armorial ensign of the Incas.\" A 1917 world flags book says the Inca \"heir-apparent ... was entitled to display the royal standard of the rainbow in his military campaigns.\"", "title": "Arts and technology" }, { "paragraph_id": 96, "text": "In modern times the rainbow flag has been wrongly associated with the Tawantinsuyu and displayed as a symbol of Inca heritage by some groups in Peru and Bolivia. The city of Cusco also flies the Rainbow Flag, but as an official flag of the city. The Peruvian president Alejandro Toledo (2001–2006) flew the Rainbow Flag in Lima's presidential palace. However, according to Peruvian historiography, the Inca Empire never had a flag. Peruvian historian María Rostworowski said, \"I bet my life, the Inca never had that flag, it never existed, no chronicler mentioned it\". Also, to the Peruvian newspaper El Comercio, the flag dates to the first decades of the 20th century, and even the Congress of the Republic of Peru has determined that the flag is a fake by citing the conclusion of the National Academy of Peruvian History:", "title": "Arts and technology" }, { "paragraph_id": 97, "text": "\"The official use of the wrongly called 'Tawantinsuyu flag' is a mistake. In the Pre-Hispanic Andean World there did not exist the concept of a flag, it did not belong to their historic context\". National Academy of Peruvian History", "title": "Arts and technology" }, { "paragraph_id": 98, "text": "The people of the Andes, including the Incas, were able to adapt to high-altitude living through successful acclimatization, which is characterized by increasing oxygen supply to the blood tissues. 
For natives living in the Andean highlands, this was achieved through the development of a larger lung capacity and an increase in red blood cell counts, hemoglobin concentration, and capillary beds.", "title": "Adaptations to altitude" }, { "paragraph_id": 99, "text": "Compared to other humans, the Andeans had slower heart rates, almost one-third larger lung capacity, about 2 L (4 pints) more blood volume and double the amount of hemoglobin, which transfers oxygen from the lungs to the rest of the body. While the Conquistadors may have been taller, the Inca had the advantage of coping with the extraordinary altitude. The Tibetans in Asia living in the Himalayas are also adapted to living in high altitudes, although the adaptation is different from that of the Andeans.", "title": "Adaptations to altitude" }, { "paragraph_id": 100, "text": "", "title": "External links" } ]
The Inca Empire, called Tawantinsuyu by its subjects, was the largest empire in pre-Columbian America. The administrative, political, and military center of the empire was in the city of Cusco. The Inca civilization rose from the Peruvian highlands sometime in the early 13th century. The Spanish began the conquest of the Inca Empire in 1532 and by 1572, the last Inca state was fully conquered. From 1438 to 1533, the Incas incorporated a large portion of western South America, centered on the Andean Mountains, using conquest and peaceful assimilation, among other methods. At its largest, the empire joined modern-day Peru, what are now western Ecuador, western and south central Bolivia, northwest Argentina, the southwesternmost tip of Colombia and a large portion of modern-day Chile into a state comparable to the historical empires of Eurasia. Its official language was Quechua. The Inca Empire was unique in that it lacked many of the features associated with civilization in the Old World. Anthropologist Gordon McEwan wrote that the Incas were able to construct "one of the greatest imperial states in human history" without the use of the wheel, draft animals, knowledge of iron or steel, or even a system of writing. Notable features of the Inca Empire included its monumental architecture, especially stonework, extensive road network reaching all corners of the empire, finely-woven textiles, use of knotted strings (quipu) for record keeping and communication, agricultural innovations and production in a difficult environment, and the organization and management fostered or imposed on its people and their labor. The Inca Empire functioned largely without money and without markets. Instead, exchange of goods and services was based on reciprocity between individuals and among individuals, groups, and Inca rulers. "Taxes" consisted of a labour obligation of a person to the Empire. 
The Inca rulers reciprocated by granting access to land and goods and providing food and drink in celebratory feasts for their subjects. Many local forms of worship persisted in the empire, most of them concerning local sacred Huacas, but the Inca leadership encouraged the sun worship of Inti – their sun god – and imposed its sovereignty above other cults such as that of Pachamama. The Incas considered their king, the Sapa Inca, to be the "son of the sun". The Incan economy is a subject of scholarly debate. Darrell E. La Lone, in his work The Inca as a Nonmarket Economy, noted that scholars have described it as "feudal, slave, [or] socialist," as well as "a system based on reciprocity and redistribution; a system with markets and commerce; or an Asiatic mode of production."
2001-12-04T22:42:11Z
2023-12-28T09:50:31Z
[ "Template:Anchor", "Template:Cite encyclopedia", "Template:American monarchies", "Template:Use dmy dates", "Template:Authority control", "Template:Sfn", "Template:See also", "Template:Library resources box", "Template:Empires", "Template:Short description", "Template:Cite news", "Template:Citation", "Template:Clear", "Template:Inca Empire topics", "Template:Pre-Columbian", "Template:Peru topics", "Template:Pp-vandalism", "Template:Reflist", "Template:Cbignore", "Template:Notelist", "Template:Cite journal", "Template:Div col", "Template:Div col end", "Template:OCLC", "Template:Circa", "Template:Convert", "Template:Cite book", "Template:Cite web", "Template:Commons and category", "Template:Infobox Former Country", "Template:Lang", "Template:Portal", "Template:ISBN", "Template:Efn", "Template:Further", "Template:Main", "Template:Quote box", "Template:Webarchive", "Template:Redirect-multi", "Template:Inca civilization" ]
https://en.wikipedia.org/wiki/Inca_Empire
15,321
Inca (disambiguation)
The Inca Empire was the largest empire in pre-Columbian America. Inca, Inka, or İncə may also refer to:
The Inca Empire was the largest empire in pre-Columbian America. Inca, Inka, or İncə may also refer to: Inca civilization, centered in what is now Peru Inca people, the people of the Inca Empire Quechua people, the people of the Inca civilization Inca language, the Quechuan languages Sapa Inca or Inka, the main ruler of the Inca Empire
2001-12-04T22:49:24Z
2023-10-30T05:45:35Z
[ "Template:Intitle", "Template:Lookfrom", "Template:Lang-fr", "Template:Disambiguation", "Template:Wiktionary", "Template:TOC right", "Template:Ship", "Template:Canned search" ]
https://en.wikipedia.org/wiki/Inca_(disambiguation)
15,323
Internet Protocol
The Internet Protocol (IP) is the network layer communications protocol in the Internet protocol suite for relaying datagrams across network boundaries. Its routing function enables internetworking, and essentially establishes the Internet. IP has the task of delivering packets from the source host to the destination host solely based on the IP addresses in the packet headers. For this purpose, IP defines packet structures that encapsulate the data to be delivered. It also defines addressing methods that are used to label the datagram with source and destination information. IP was the connectionless datagram service in the original Transmission Control Program introduced by Vint Cerf and Bob Kahn in 1974, which was complemented by a connection-oriented service that became the basis for the Transmission Control Protocol (TCP). The Internet protocol suite is therefore often referred to as TCP/IP. The first major version of IP, Internet Protocol version 4 (IPv4), is the dominant protocol of the Internet. Its successor is Internet Protocol version 6 (IPv6), which has been in increasing deployment on the public Internet since around 2006. The Internet Protocol is responsible for addressing host interfaces, encapsulating data into datagrams (including fragmentation and reassembly) and routing datagrams from a source host interface to a destination host interface across one or more IP networks. For these purposes, the Internet Protocol defines the format of packets and provides an addressing system. Each datagram has two components: a header and a payload. The IP header includes a source IP address, a destination IP address, and other metadata needed to route and deliver the datagram. The payload is the data that is transported. This method of nesting the data payload in a packet with a header is called encapsulation. IP addressing entails the assignment of IP addresses and associated parameters to host interfaces. 
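The header-plus-payload encapsulation described above can be sketched with Python's struct module. This is a minimal illustration only: the concrete field values (TTL 64, protocol 17 for UDP, zeroed identification and checksum) are assumptions for the example, not something the text prescribes.

```python
import struct
import socket

def build_ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
    """Pack a minimal 20-byte IPv4 header (checksum left zero in this sketch)."""
    version_ihl = (4 << 4) | 5        # version 4, header length 5 * 32-bit words
    total_length = 20 + payload_len   # header plus payload, in bytes
    identification = 0                # illustrative: no fragmentation bookkeeping
    flags_fragment = 0
    ttl = 64                          # illustrative default
    protocol = 17                     # 17 = UDP, an assumed payload type
    checksum = 0                      # normally computed over the header
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0, total_length, identification, flags_fragment,
        ttl, protocol, checksum,
        socket.inet_aton(src), socket.inet_aton(dst),
    )

# Encapsulation: prepending the header to the payload yields the datagram.
payload = b"hello"
datagram = build_ipv4_header("192.0.2.1", "198.51.100.7", len(payload)) + payload
```

The source and destination addresses here are drawn from the documentation-reserved 192.0.2.0/24 and 198.51.100.0/24 ranges.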
The address space is divided into subnets, involving the designation of network prefixes. IP routing is performed by all hosts, as well as routers, whose main function is to transport packets across network boundaries. Routers communicate with one another via specially designed routing protocols, either interior gateway protocols or exterior gateway protocols, as needed for the topology of the network. In May 1974, the Institute of Electrical and Electronics Engineers (IEEE) published a paper entitled "A Protocol for Packet Network Intercommunication". The paper's authors, Vint Cerf and Bob Kahn, described an internetworking protocol for sharing resources using packet switching among network nodes. A central control component of this model was the "Transmission Control Program" that incorporated both connection-oriented links and datagram services between hosts. The monolithic Transmission Control Program was later divided into a modular architecture consisting of the Transmission Control Protocol and User Datagram Protocol at the transport layer and the Internet Protocol at the internet layer. The model became known as the Department of Defense (DoD) Internet Model and Internet protocol suite, and informally as TCP/IP. IP versions 1 to 3 were experimental versions, designed between 1973 and 1978. The following Internet Experiment Note (IEN) documents describe version 3 of the Internet Protocol, prior to the modern version of IPv4: The dominant internetworking protocol in the Internet Layer in use is IPv4; the number 4 identifies the protocol version, carried in every IP datagram. IPv4 is described in RFC 791 (1981). Versions 2 and 3, and a draft of version 4, allowed an address length of up to 128 bits, but this was reduced to 32 bits in the final version of IPv4. Version number 5 was used by the Internet Stream Protocol, an experimental streaming protocol that was not adopted. The successor to IPv4 is IPv6. 
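The division of the address space into subnets via network prefixes, described earlier in this section, can be illustrated with Python's standard ipaddress module (addresses again taken from the documentation-reserved example ranges):

```python
import ipaddress

# A /24 prefix designates a subnet: the first 24 bits identify the
# network, the remaining 8 bits identify hosts within it.
net = ipaddress.ip_network("192.0.2.0/24")

# Testing an address against a prefix is the elementary operation
# behind a router's forwarding decision.
inside = ipaddress.ip_address("192.0.2.42") in net     # True
outside = ipaddress.ip_address("198.51.100.1") in net  # False

print(net.num_addresses, inside, outside)  # 256 True False
```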
IPv6 was a result of several years of experimentation and dialog during which various protocol models were proposed, such as TP/IX (RFC 1475), PIP (RFC 1621) and TUBA (TCP and UDP with Bigger Addresses, RFC 1347). Its most prominent difference from version 4 is the size of the addresses. While IPv4 uses 32 bits for addressing, yielding c. 4.3 billion (4.3×10⁹) addresses, IPv6 uses 128-bit addresses providing c. 3.4×10³⁸ addresses. Although adoption of IPv6 has been slow, as of January 2023, most countries in the world show significant adoption of IPv6, with over 41% of Google's traffic being carried over IPv6 connections. The assignment of the new protocol as IPv6 was uncertain until due diligence assured that IPv6 had not been used previously. Other Internet Layer protocols have been assigned version numbers, such as 7 (IP/TX), 8 and 9 (historic). Notably, on April 1, 1994, the IETF published an April Fools' Day joke about IPv9. IPv9 was also used in an alternate proposed address space expansion called TUBA. A 2004 Chinese proposal for an "IPv9" protocol appears to be unrelated to all of these, and is not endorsed by the IETF. The design of the Internet protocol suite adheres to the end-to-end principle, a concept adapted from the CYCLADES project. Under the end-to-end principle, the network infrastructure is considered inherently unreliable at any single network element or transmission medium and is dynamic in terms of the availability of links and nodes. No central monitoring or performance measurement facility exists that tracks or maintains the state of the network. For the benefit of reducing network complexity, the intelligence in the network is located in the end nodes. As a consequence of this design, the Internet Protocol only provides best-effort delivery and its service is characterized as unreliable. In network architectural parlance, it is a connectionless protocol, in contrast to connection-oriented communication. 
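The 32-bit versus 128-bit address comparison above is easy to verify: n address bits give 2ⁿ possible addresses. A quick check in Python:

```python
# 32-bit IPv4 address space
ipv4_space = 2 ** 32    # 4,294,967,296, i.e. about 4.3 billion

# 128-bit IPv6 address space
ipv6_space = 2 ** 128   # about 3.4 x 10^38

print(ipv4_space)            # 4294967296
print(f"{ipv6_space:.1e}")   # 3.4e+38
```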
Various fault conditions may occur, such as data corruption, packet loss and duplication. Because routing is dynamic, meaning every packet is treated independently, and because the network maintains no state based on the path of prior packets, different packets may be routed to the same destination via different paths, resulting in out-of-order delivery to the receiver. All fault conditions in the network must be detected and compensated by the participating end nodes. The upper layer protocols of the Internet protocol suite are responsible for resolving reliability issues. For example, a host may buffer network data to ensure correct ordering before the data is delivered to an application. IPv4 provides safeguards to ensure that the header of an IP packet is error-free. A routing node discards packets that fail a header checksum test. Although the Internet Control Message Protocol (ICMP) provides notification of errors, a routing node is not required to notify either end node of errors. IPv6, by contrast, operates without header checksums, since current link layer technology is assumed to provide sufficient error detection. The dynamic nature of the Internet and the diversity of its components provide no guarantee that any particular path is actually capable of, or suitable for, performing the data transmission requested. One of the technical constraints is the size of data packets possible on a given link. Facilities exist to examine the maximum transmission unit (MTU) size of the local link and Path MTU Discovery can be used for the entire intended path to the destination. The IPv4 internetworking layer automatically fragments a datagram into smaller units for transmission when the link MTU is exceeded. IP provides re-ordering of fragments received out of order. An IPv6 network does not perform fragmentation in network elements, but requires end hosts and higher-layer protocols to avoid exceeding the path MTU. 
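IPv4 fragmentation as just described can be sketched with simple arithmetic: the fragment-offset field counts 8-byte units, so every non-final fragment must carry a payload that is a multiple of 8 bytes. The helper below is a hypothetical illustration of the sizing rule, not code from any real IP stack.

```python
def fragment_layout(payload_len: int, mtu: int, header_len: int = 20):
    """Return (offset_in_8_byte_units, payload_bytes) for each IPv4 fragment."""
    per_frag = (mtu - header_len) // 8 * 8   # 8-byte-aligned payload per fragment
    frags, offset = [], 0
    while offset < payload_len:
        take = min(per_frag, payload_len - offset)
        frags.append((offset // 8, take))
        offset += take
    return frags

# A 4000-byte payload over a link with a 1500-byte MTU: each fragment
# also gets its own copy of the 20-byte header.
print(fragment_layout(4000, 1500))   # [(0, 1480), (185, 1480), (370, 1040)]
```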
The Transmission Control Protocol (TCP) is an example of a protocol that adjusts its segment size to be smaller than the MTU. The User Datagram Protocol (UDP) and ICMP disregard MTU size, thereby forcing IP to fragment oversized datagrams. During the design phase of the ARPANET and the early Internet, the security aspects and needs of a public, international network could not be adequately anticipated. Consequently, many Internet protocols exhibited vulnerabilities highlighted by network attacks and later security assessments. In 2008, a thorough security assessment and proposed mitigation of problems was published. The IETF has been pursuing further studies.
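Returning to the reliability discussion above: the IPv4 header checksum that routing nodes test is defined in RFC 791 as the ones' complement of the ones' complement sum of the header's 16-bit words. A minimal sketch (assumes an even-length header):

```python
def ipv4_checksum(header: bytes) -> int:
    """Ones' complement of the ones' complement sum of 16-bit words (RFC 791)."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:                       # fold carries back into low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A 20-byte example header with its checksum field (bytes 10-11) zeroed:
hdr = bytes.fromhex("4500 0073 0000 4000 4011 0000 c0a8 0001 c0a8 00c7")
print(hex(ipv4_checksum(hdr)))   # 0xb861
```

Recomputing the checksum over a received header with the checksum field included yields 0 when the header is intact, which is the test a routing node applies before forwarding or discarding.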
2001-11-05T15:09:42Z
2023-12-27T06:34:00Z
[ "Template:Cite book", "Template:Authority control", "Template:Portal", "Template:Citation", "Template:ISBN", "Template:Wiktionary", "Template:IPstack", "Template:Reflist", "Template:Val", "Template:As of", "Template:Cite report", "Template:Cite web", "Template:Cite IETF", "Template:IPv6", "Template:Short description", "Template:IETF RFC", "Template:Cite journal", "Template:Dead link" ]
https://en.wikipedia.org/wiki/Internet_Protocol
15,328
Impeachment
Impeachment is a process by which a legislative body or other legally constituted tribunal initiates charges against a public official for misconduct. It may be understood as a unique process involving both political and legal elements. In Europe and Latin America, impeachment tends to be confined to ministerial officials as the unique nature of their positions may place ministers beyond the reach of the law to prosecute, or their misconduct is not codified into law as an offense except through the unique expectations of their high office. Both "peers and commoners" have been subject to the process, however. From 1990 to 2020, there have been at least 272 impeachment charges against 132 different heads of state in 63 countries. Most democracies (with the notable exception of the United States) involve the courts (often a national constitutional court) in some way. In Latin America, which includes almost 40% of the world's presidential systems, ten presidents from seven countries were removed from office by their national legislatures via impeachments or declarations of incapacity between 1978 and 2019. National legislation differs regarding both the consequences and definition of impeachment, but the intent is nearly always to expeditiously vacate the office. In most nations the process begins in the lower house of a bicameral assembly, which brings charges of misconduct; the upper house then administers an impeachment trial and sentencing. Most commonly, an official is considered impeached after the house votes to accept the charges, and impeachment itself does not remove the official from office. Because impeachment involves a departure from the normal constitutional procedures by which individuals achieve high office (election, ratification, or appointment) and because it generally requires a supermajority, it is usually reserved for those deemed to have committed serious abuses of their office. 
In the United States, for example, impeachment at the federal level is limited to those who may have committed "Treason, Bribery, or other high crimes and misdemeanors"—the latter phrase referring to offenses against the government or the constitution, grave abuses of power, violations of the public trust, or other political crimes, even if not indictable criminal offenses. Under the United States Constitution, the House of Representatives has the sole power of impeachment while the Senate has the sole power to try impeachments (i.e., to acquit or convict); the validity of an impeachment trial is a political question that is nonjusticiable (i.e., is not reviewable by the courts). In the United States, impeachment is a remedial rather than penal process, intended to "effectively 'maintain constitutional government' by removing individuals unfit for office"; persons subject to impeachment and removal remain "liable and subject to Indictment, Trial, Judgment and Punishment, according to Law." Impeachment is provided for in the constitutional laws of many countries including Brazil, France, India, Ireland, the Philippines, Russia, South Korea, and the United States. It is distinct from the motion of no confidence procedure available in some countries whereby a motion of censure can be used to remove a government and its ministers from office. Such a procedure is not applicable in countries with presidential forms of government like the United States. The word "impeachment" likely derives from Old French empeechier, from the Latin word impedīre expressing the idea of catching or ensnaring by the 'foot' (pes, pedis), and has analogues in the modern French verb empêcher (to prevent) and the modern English impede. Medieval popular etymology also associated it (wrongly) with derivations from the Latin impetere (to attack). The process was first used by the English "Good Parliament" against William Latimer, 4th Baron Latimer in the second half of the 14th century. 
Following the English example, the constitutions of Virginia (1776), Massachusetts (1780) and other states thereafter adopted the impeachment mechanism, but they restricted the punishment to removal of the official from office. In West Africa, kings of the Ashanti Empire who violated any of the oaths taken during their enstoolment were destooled by Kingmakers. For instance, if a king punished citizens arbitrarily or was exposed as corrupt, he would be destooled. Destoolment entailed Kingmakers removing the sandals of the king and bumping his buttocks on the ground three times. Once destooled from office, his sanctity and thus reverence were lost, as he could not exercise any powers he had as king; this included Chief administrator, Judge, and Military Commander. The now previous king was dispossessed of the Stool, swords and other regalia which symbolized his office and authority. He also lost the position as custodian of the land. However, despite being destooled from office, the king remained a member of the royal family from which he was elected. In Brazil, as in most other Latin American countries, "impeachment" refers to the definitive removal from office. The president of Brazil may be provisionally removed from office by the Chamber of Deputies and then tried and definitively removed from office by the Federal Senate. The Brazilian Constitution requires that two-thirds of the Deputies vote in favor of the opening of the impeachment process of the President and that two-thirds of the Senators vote for impeachment. State governors and municipal mayors can also be impeached by the respective legislative bodies. Article 2 of Law no. 
1.079, from 10 April 1950, or "The Law of Impeachment", states that "The crimes defined in this law, even when simply attempted, are subject to the penalty of loss of office, with disqualification for up to five years for the exercise of any public function, to be imposed by the Federal Senate in proceedings against the President of the Republic, Ministers of State, Ministers of the Supreme Federal Tribunal, or the Attorney General." Initiation: An accusation of a responsibility crime against the President may be brought by any Brazilian citizen; however, the President of the Chamber of Deputies holds prerogative to accept the charge, which if accepted will be read at the next session and reported to the President of the Republic. Extraordinary Committee: An extraordinary committee is established, consisting of members from each political party in proportion to their party's membership. The committee is responsible for assessing the need for impeachment proceedings. The President is given ten parliamentary sessions to present their defense. Following this, two legislative sessions are held to allow for the formulation of a legal opinion by a rapporteur regarding whether or not impeachment proceedings should be initiated and brought to trial in the Senate. The rapporteur's opinion is subject to a vote within the committee. If the majority accepts the rapporteur's opinion, it is deemed adopted. However, if the majority rejects the rapporteur's opinion, the committee adopts an alternative opinion proposed by the majority. For instance, if the rapporteur recommends against impeachment but fails to secure majority support, the committee will adopt the opinion to proceed with impeachment. Conversely, if the rapporteur advises impeachment but does not obtain majority approval, the committee will adopt the opinion not to impeach. 
If the committee vote is successful, the rapporteur's opinion is considered adopted, thereby determining the course of action regarding impeachment. Chamber of Deputies: The Chamber issues a call-out vote to accept the opinion of the Committee, requiring a supermajority of two thirds in favor of an impeachment opinion (or a supermajority of two thirds against a dismissal opinion) of the Committee, in order to authorize the Senate impeachment proceedings. The President is suspended (provisionally removed) from office as soon as the Senate receives and accepts from the Chamber of Deputies the impeachment charges and decides to proceed with a trial. The Senate: The process in the Senate had been historically lacking in procedural guidance until 1992, when the Senate published in the Official Diary of the Union the step-by-step procedure of the Senate's impeachment process, which involves the formation of another special committee and closely resembles the lower house process, with time constraints imposed on the steps taken. The committee's opinion must be presented within 10 days, after which it is put to a call-out vote at the next session. The vote must proceed within a single session; the vote on President Rousseff took over 20 hours. A simple majority vote in the Senate begins formal deliberation on the complaint, immediately suspends the President from office, installs the Vice President as acting president, and begins a 20-day period for written defense as well as up to 180-days for the trial. In the event the trial proceeds slowly and exceeds 180 days, the Brazilian Constitution determines that the President is entitled to return and stay provisionally in office until the trial comes to its decision. 
Senate plenary deliberation: The committee interrogates the accused or their counsel, from which they have a right to abstain, and also a probative session which guarantees the accused rights to contradiction, or audiatur et altera pars, allowing access to the courts and due process of law under Article 5 of the constitution. The accused has 15 days to present written arguments in defense and answer to the evidence gathered, and then the committee shall issue an opinion on the merits within ten days. The entire package is published for each senator before a single plenary session issues a call-out vote, which shall proceed to trial on a simple majority and close the case otherwise. Senate trial: A hearing for the complainant and the accused convenes within 48 hours of notification from deliberation, from which a trial is scheduled by the president of the Supreme Court no less than ten days after the hearing. The senators sit as judges, while witnesses are interrogated and cross-examined; all questions must be presented to the president of the Supreme Court, who, as prescribed in the Constitution, presides over the trial. The president of the Supreme Court allots time for debate and rebuttal, after which time the parties leave the chamber and the senators deliberate on the indictment. The President of the Supreme Court reads the summary of the grounds, the charges, the defense and the evidence to the Senate. The senators in turn issue their judgement. On conviction by a supermajority of two thirds, the president of the Supreme Court pronounces the sentence and the accused is immediately notified. If there is no supermajority for conviction, the accused is acquitted. Upon conviction, the officeholder has his or her political rights revoked for eight years, which bars them from running for any office during that time. Fernando Collor de Mello, the 32nd President of Brazil, resigned in 1992 amidst impeachment proceedings. 
Despite his resignation, the Senate nonetheless voted to convict him and bar him from holding any office for eight years, due to evidence of bribery and misappropriation. In 2016, the Chamber of Deputies initiated an impeachment case against President Dilma Rousseff on allegations of budgetary mismanagement, a crime of responsibility under the Constitution. On 12 May 2016, after 20 hours of deliberation, the admissibility of the accusation was approved by the Senate with 55 votes in favor and 22 against (an absolute majority would have been sufficient for this step), and Vice President Michel Temer was notified to assume the duties of the President pending trial. On 31 August, 61 senators voted in favor of impeachment and 20 voted against it, achieving the two-thirds majority needed for Rousseff's definitive removal. A separate vote to disqualify her from office for five years failed, receiving less than two thirds in favor (even though the Constitution does not separate disqualification from removal). The process of impeaching the president of Croatia can be initiated by a two-thirds majority vote in favor in the Sabor and is thereafter referred to the Constitutional Court, which must accept such a proposal with a two-thirds majority vote in favor in order for the president to be removed from office. This has never occurred in the history of the Republic of Croatia. In case of a successful impeachment motion, a president's constitutional term of five years would be terminated and an election called within 60 days of the vacancy occurring. During the period of vacancy, the presidential powers and duties would be carried out by the speaker of the Croatian Parliament in his or her capacity as Acting President of the Republic. In the Czech Republic, the constitution was changed in 2013. Since then, the process can be started by at least three-fifths of senators present, and must be approved by at least three-fifths of all members of the Chamber of Deputies within three months.
Also, the President can be impeached for high treason (newly defined in the Constitution) or any serious infringement of the Constitution. The process starts in the Senate of the Czech Republic, which has the right to impeach only the president. After approval by the Chamber of Deputies, the case is passed to the Constitutional Court of the Czech Republic, which decides the verdict against the president. If the Court finds the President guilty, the President is removed from office and is permanently barred from being elected President of the Czech Republic again. No Czech president has ever been impeached, though members of the Senate sought to impeach President Václav Klaus in 2013. That case was dismissed by the court, which reasoned that his mandate had expired. The Senate also proposed to impeach President Miloš Zeman in 2019, but the Chamber of Deputies did not vote on the issue in time, so the case never reached the Court. In Denmark, the possibility of impeaching current and former ministers was established with the Danish Constitution of 1849. Unlike many other countries, Denmark does not have a Constitutional Court, which would normally handle these types of cases. Instead, Denmark has a special Court of Impeachment (in Danish: Rigsretten), which is convened whenever a current or former minister is impeached. The role of the Court of Impeachment is to process and deliver judgments against current and former ministers who are accused of unlawful conduct in office. The legal content of ministerial responsibility is laid down in the Ministerial Accountability Act, which has its background in section 13 of the Danish Constitution, according to which the ministers' accountability is determined in more detail by law. In Denmark, the normal practice in impeachment cases is that the matter is first brought up in the Danish Parliament (Folketing) for debate between the different members and parties.
After the debate, the members of the Danish Parliament vote on whether a current or former minister should be impeached. If there is a majority in the Danish Parliament for an impeachment case against a current or former minister, a Court of Impeachment is called into session. In Denmark, the Court of Impeachment consists of up to 15 Supreme Court judges and 15 members of parliament appointed by the Danish Parliament. The members of the Court of Impeachment serve a six-year term in this position. In 1995, the former Minister of Justice Erik Ninn-Hansen of the Conservative People's Party was impeached in connection with the Tamil Case. The case centered on the illegal processing of family reunification applications: from September 1987 to January 1989, applications for family reunification of Tamil refugees from civil war-torn Sri Lanka were put on hold in violation of Danish and international law. On 22 June 1995, Ninn-Hansen was found guilty of violating paragraph five, subsection one, of the Danish Ministerial Responsibility Act, which says: "A minister is punished if he intentionally or through gross negligence neglects the duties incumbent on him under the constitution or legislation in general or according to the nature of his post." A majority of the judges in that impeachment case voted for Ninn-Hansen to receive a suspended sentence of four months with one year of probation. The sentence was suspended mainly because of Ninn-Hansen's personal circumstances, in particular his health and age; he was 73 years old when the sentence was handed down. After the verdict, Ninn-Hansen complained to the European Court of Human Rights, arguing, among other things, that the Court of Impeachment was not impartial. The European Court of Human Rights dismissed the complaint on 18 May 1999.
As a direct consequence of this case, the Conservative-led government of then Prime Minister Poul Schlüter was forced to step down from power. In February 2021, the former Minister for Immigration and Integration Inger Støjberg, at that time a member of the Danish liberal party Venstre, was impeached when it was discovered that she had, possibly against both Danish and international law, tried to separate couples in refugee centres in Denmark where the wives were under legal age. According to a commission report, Inger Støjberg had also lied to the Danish Parliament and failed to report relevant details to the Parliamentary Ombudsman. The decision to initiate an impeachment case was adopted by the Danish Parliament with a 141–30 vote (in Denmark, 90 members of parliament need to vote for impeachment before it can be implemented). On 13 December 2021, Støjberg was convicted by the special Court of Impeachment of separating asylum-seeking families illegally under Danish and international law and was sentenced to 60 days in prison. The majority of the judges in the Court of Impeachment (25 out of 26) found it proven that Støjberg on 10 February 2016 decided that an accommodation scheme should apply without the possibility of exceptions, so that all asylum-seeking spouses and cohabiting couples in which one partner was a minor aged 15–17 had to be separated and accommodated in separate asylum centers. On 21 December, a majority in the Folketing voted that, in light of the sentence, she was no longer worthy of sitting in the Folketing, and she therefore immediately lost her seat. In France, the comparable procedure is called destitution. The president of France can be impeached by the French Parliament for willfully violating the Constitution or the national laws.
The process of impeachment is laid down in Article 68 of the French Constitution. A group of senators or a group of members of the National Assembly can begin the process. Both the National Assembly and the Senate must then approve the impeachment, after which the two houses unite to form the High Court. Finally, the High Court decides whether or not to declare the impeachment of the president of France. The federal president of Germany can be impeached both by the Bundestag and by the Bundesrat for willfully violating federal law. Once the Bundestag or the Bundesrat impeaches the president, the Federal Constitutional Court decides whether the president is guilty as charged and, if so, whether to remove him or her from office. The Federal Constitutional Court also has the power to remove federal judges from office for willfully violating core principles of the federal constitution or a state constitution. The impeachment procedure is regulated in Article 61 of the Basic Law for the Federal Republic of Germany. There is no formal impeachment process for the chancellor of Germany; however, the Bundestag can replace the chancellor at any time by voting for a new chancellor (a constructive vote of no confidence, Article 67 of the Basic Law). There has never been an impeachment of the German president. Constructive votes of no confidence against the chancellor occurred in 1972 and 1982, with only the second being successful. The chief executive of Hong Kong can be impeached by the Legislative Council. If the chief executive is charged with "serious breach of law or dereliction of duty" and refuses to resign, a motion for investigation, initiated jointly by at least one-fourth of all the legislators, must first be passed by the council. An independent investigation committee, chaired by the chief justice of the Court of Final Appeal, will then carry out the investigation and report back to the council.
If the Council finds the evidence sufficient to substantiate the charges, it may pass a motion of impeachment by a two-thirds majority. However, the Legislative Council does not have the power to actually remove the chief executive from office, as the chief executive is appointed by the Central People's Government (the State Council of China). The council can only report the result to the Central People's Government for its decision. Article 13 of Hungary's Fundamental Law (constitution) provides for the process of impeaching and removing the president. The president enjoys immunity from criminal prosecution while in office, but may be charged with crimes committed during his term afterwards. Should the president violate the constitution while discharging his duties or commit a willful criminal offense, he may be removed from office. Removal proceedings may be proposed by the concurring recommendation of one-fifth of the 199 members of the country's unicameral Parliament. Parliament votes on the proposal by secret ballot, and if two thirds of all representatives agree, the president is impeached. Once impeached, the president's powers are suspended, and the Constitutional Court decides whether or not the president should be removed from office. In India, the president and judges, including the chief justice of the Supreme Court and of the high courts, can be impeached by Parliament before the expiry of their term for violation of the Constitution. Other than impeachment, no penalty can be imposed on a sitting president for violation of the Constitution under Article 361 of the constitution. However, after removal, a president can be punished for unlawful activity, such as disrespecting the constitution, that has already been proven. No president has faced impeachment proceedings, so the provisions for impeachment have never been tested. The sitting president cannot be charged and would need to step down for that to happen.
In the Republic of Ireland, formal impeachment applies only to the Irish president. Article 12 of the Irish Constitution provides that, unless judged to be "permanently incapacitated" by the Supreme Court, the president can be removed from office only by the houses of the Oireachtas (parliament) and only for the commission of "stated misbehaviour". Either house of the Oireachtas may impeach the president, but only by a resolution approved by a majority of at least two thirds of its total membership; and a house may not consider a proposal for impeachment unless requested to do so by at least thirty of its members. Where one house impeaches the president, the other house either investigates the charge or commissions another body or committee to do so. The investigating house can remove the president if it decides, by at least a two-thirds majority of its members, both that the president is guilty of the charge and that the charge is sufficiently serious to warrant the president's removal. To date, no impeachment of an Irish president has ever taken place. The president holds a largely ceremonial office, the dignity of which is considered important, so it is likely that a president would resign from office long before undergoing formal impeachment and conviction. In Italy, according to Article 90 of the Constitution, the President of Italy can be impeached through a majority vote of the Parliament in joint session for high treason or for attempting to overthrow the Constitution. If impeached, the president of the Republic is then tried by the Constitutional Court, integrated with sixteen citizens older than forty chosen by lot from a list compiled by the Parliament every nine years.
Italian press and political forces made use of the term "impeachment" for the attempts by some members of the parliamentary opposition to initiate the procedure provided for in Article 90 against Presidents Francesco Cossiga (1991), Giorgio Napolitano (2014) and Sergio Mattarella (2018). By Article 78 of the Constitution of Japan, judges can be impeached; the voting method is specified by law. The National Diet has two organs for this purpose, the Judge Indictment Committee (裁判官訴追委員会, Saibankan sotsui iinkai) and the Judge Impeachment Court (裁判官弾劾裁判所, Saibankan dangai saibansho), the latter established by Article 64 of the Constitution. The former has a role similar to that of a prosecutor, and the latter is analogous to a court. Seven judges have been removed through this process. Members of the Liechtenstein Government can be impeached before the State Court for breaches of the Constitution or of other laws. As Liechtenstein is a hereditary monarchy, the Sovereign Prince cannot be impeached, as he "is not subject to the jurisdiction of the courts and does not have legal responsibility". The same is true of any member of the Princely House who exercises the function of head of state, should the Prince be temporarily prevented from doing so or in preparation for the Succession. In the Republic of Lithuania, the president may be impeached by a three-fifths majority in the Seimas. President Rolandas Paksas was removed from office by impeachment on 6 April 2004, after the Constitutional Court of Lithuania found him guilty of having violated his oath and the constitution. He was the first European head of state to have been impeached. In Norway, members of government, representatives of the national assembly (Stortinget) and Supreme Court judges can be impeached for criminal offenses tied to their duties and committed in office, according to the Constitution of 1814, §§ 86 and 87. The procedural rules were modeled on the U.S. rules and are quite similar to them. Impeachment has been used eight times since 1814, most recently in 1927. Many argue that impeachment has fallen into desuetude.
In cases of impeachment, a specially appointed court, the Riksrett, is convened. Impeachment in the Philippines follows procedures similar to those of the United States. Under Sections 2 and 3, Article XI of the Constitution of the Philippines, the House of Representatives of the Philippines has the exclusive power to initiate all cases of impeachment against the president, vice president, members of the Supreme Court, members of the Constitutional Commissions (the Commission on Elections, the Civil Service Commission and the Commission on Audit), and the ombudsman. When a third of its membership has endorsed the article(s) of impeachment, they are transmitted to the Senate of the Philippines, which, sitting as an impeachment tribunal, tries and decides the case. A main difference from U.S. proceedings, however, is that only one third of House members are required to approve the motion to impeach the president (as opposed to a simple majority of those present and voting in the U.S. House). In the Senate, selected members of the House of Representatives act as the prosecutors and the senators act as judges, with the Senate president presiding over the proceedings (the chief justice jointly presides with the Senate president if the president is on trial). As in the United States, convicting the official in question requires that a minimum of two thirds (i.e. 16 of 24 members) of all the members of the Senate vote in favor of conviction. If an impeachment attempt is unsuccessful or the official is acquitted, no new case can be filed against that impeachable official for at least one full year. President Joseph Estrada was the first official impeached by the House, in 2000, but his trial ended prematurely: his allies narrowly defeated a motion to open an envelope of evidence, provoking public outrage. Estrada was deposed days later during the 2001 EDSA Revolution.
In 2005, 2006, 2007 and 2008, impeachment complaints were filed against President Gloria Macapagal Arroyo, but none of the cases reached the required endorsement of one third of the members for transmittal to, and trial by, the Senate. In March 2011, the House of Representatives impeached Ombudsman Merceditas Gutierrez, the second person to be impeached. In April, Gutierrez resigned before the Senate could convene as an impeachment court. In December 2011, in what was described as "blitzkrieg fashion", 188 of the 285 members of the House of Representatives voted to transmit the 56-page articles of impeachment against Supreme Court Chief Justice Renato Corona. To date, three officials have been impeached by the House of Representatives, and two of them were not convicted. The third, Chief Justice Renato C. Corona, was convicted on 29 May 2012 by the Senate under Article II of the Articles of Impeachment (for betraying public trust), with a 20–3 vote of the senator-judges. The first impeachment process against Pedro Pablo Kuczynski, the incumbent president of Peru since 2016, was initiated by the Congress of Peru on 15 December 2017. According to Luis Galarreta, the President of the Congress, the whole process of impeachment could have taken as little as a week to complete. This event was part of the second stage of the political crisis generated by the confrontation between the government of Pedro Pablo Kuczynski and the Congress, in which the opposition party Popular Force held an absolute majority. The impeachment request was rejected by the Congress on 21 December 2017, having failed to obtain sufficient votes for the deposition. In Romania, the president can be impeached by Parliament and is then suspended. A referendum then follows to determine whether the suspended president should be removed from office. President Traian Băsescu was impeached twice by the Parliament: in 2007 and again in July 2012.
A referendum was held on 19 May 2007, and a large majority of the electorate voted against removing the president from office. For the second suspension, a referendum was held on 29 July 2012; the results were heavily against the president, but the referendum was invalidated due to low turnout. In 1999, members of the State Duma of Russia, led by the Communist Party of the Russian Federation, unsuccessfully attempted to impeach President Boris Yeltsin on charges relating to his role in the 1993 Russian constitutional crisis and the launching of the First Chechen War (1994–96). The Constitution of Singapore allows the impeachment of a sitting president on charges of treason, violation of the Constitution, corruption, or attempting to mislead the Presidential Elections Committee for the purpose of demonstrating eligibility to be elected as president. The prime minister or at least one-quarter of all members of Parliament (MPs) can propose an impeachment motion, which can succeed only if at least half of all MPs (excluding nominated members) vote in favor, whereupon the chief justice of the Supreme Court will appoint a tribunal to investigate the allegations against the president. If the tribunal finds the president guilty, or otherwise declares that the president is "permanently incapable of discharging the functions of his office by reason of mental or physical infirmity", Parliament will hold a vote on a resolution to remove the president from office, which requires a three-quarters majority to succeed. No president has ever been removed from office in this fashion. When the Union of South Africa was established in 1910, the only officials who could be impeached (though the term itself was not used) were the chief justice and judges of the Supreme Court of South Africa. The scope was broadened when the country became a republic in 1961, to include the state president.
It was further broadened in 1981 to include the new office of vice state president, and in 1994 to include the executive deputy presidents, the public protector and the Auditor-General. Since 1997, members of certain commissions established by the Constitution can also be impeached. The grounds for impeachment, and the procedures to be followed, have changed several times over the years. According to Article 65(1) of the Constitution of South Korea, the President, the Prime Minister, members of the State Council, heads of executive ministries, justices of the Constitutional Court, judges, members of the National Election Commission, and the chairperson and members of the Board of Audit and Inspection can be impeached by the National Assembly if they have violated the Constitution or other laws in the performance of their duties. Under Article 65(2) of the Constitution, a motion for impeachment must be proposed by at least one third of the members of the National Assembly and passed by a simple majority of the total membership; exceptionally, a motion to impeach the President of South Korea must be proposed by a majority and passed by two thirds of the total membership. When an impeachment motion passes the National Assembly, it is finally reviewed by the Constitutional Court of Korea, in accordance with Article 111(1) of the Constitution. While the impeachment is under review by the Constitutional Court, the impeached official is suspended from exercising power, per Article 65(3) of the Constitution. Two presidents have been impeached since the establishment of the Republic of Korea in 1948. Roh Moo-hyun was impeached by the National Assembly in 2004, but the impeachment was overturned by the Constitutional Court. Park Geun-hye was impeached by the National Assembly in 2016, and the impeachment was confirmed by the Constitutional Court on 10 March 2017. In February 2021, Judge Lim Seong-geun of the Busan High Court was impeached by the National Assembly for meddling in politically sensitive trials, the first ever impeachment of a judge in Korean history.
Unlike a presidential impeachment, impeaching a judge requires only a simple majority. Judge Lim's term expired before the Constitutional Court could render a verdict, leading the court to dismiss the case. In Turkey, according to the Constitution, the Grand National Assembly may initiate an investigation of the president, the vice president or any member of the Cabinet upon the proposal of a simple majority of its total members and, within a period of less than a month, the approval of three-fifths of the total members. The investigation would be carried out by a commission of fifteen members of the Assembly, each nominated by the political parties in proportion to their representation therein. The commission would submit its report indicating the outcome of the investigation to the speaker within two months. If the investigation is not completed within this period, the commission's time may be renewed for another month. Within ten days of its submission to the speaker, the report would be distributed to all members of the Assembly, and ten days after its distribution, the report would be discussed on the floor. Upon the approval of two thirds of the total number of the Assembly by secret vote, the person or persons about whom the investigation was conducted may be tried before the Constitutional Court. The trial would be finalized within three months; if it is not, a one-time additional period of three months shall be granted. The president, about whom an investigation has been initiated, may not call for an election. A president who is convicted by the Court is removed from office. The provision of this article also applies to offenses that the president is alleged to have committed during his term of office. In the United Kingdom, in principle, anybody may be prosecuted and tried by the two Houses of Parliament for any crime. The first recorded impeachment is that of William Latimer, 4th Baron Latimer, during the Good Parliament of 1376.
The latest was that of Henry Dundas, 1st Viscount Melville, which started in 1805 and ended with his acquittal in June 1806. Over the centuries, the procedure has been supplemented by other forms of oversight, including select committees, confidence motions, and judicial review, while the privilege of peers to be tried only in the House of Lords was abolished in 1948 (see Judicial functions of the House of Lords § Trials); thus impeachment, which has not kept up with modern norms of democracy or procedural fairness, is generally considered obsolete. In the United States federal system, Article One of the United States Constitution provides that the House of Representatives has the "sole Power of Impeachment" and the Senate has "the sole Power to try all Impeachments". Article Two provides that "The President, Vice President and all civil Officers of the United States, shall be removed from Office on Impeachment for, and Conviction of, Treason, Bribery, or other high Crimes and Misdemeanors." In the United States, impeachment is the first of two stages: an official may be impeached by a majority vote of the House, but conviction and removal from office in the Senate require "the concurrence of two thirds of the members present". Impeachment is analogous to an indictment. According to the House practice manual, "Impeachment is a constitutional remedy to address serious offenses against the system of government. It is the first step in a remedial process—that of removal from public office and possible disqualification from holding further office. The purpose of impeachment is not punishment; rather, its function is primarily to maintain constitutional government." Impeachment may be understood as a unique process involving both political and legal elements.
The Constitution provides that "Judgment in Cases of Impeachment shall not extend further than to removal from Office, and disqualification to hold and enjoy any Office of honor, Trust or Profit under the United States: but the Party convicted shall nevertheless be liable and subject to Indictment, Trial, Judgment and Punishment, according to Law." It is generally accepted that "a former President may be prosecuted for crimes of which he was acquitted by the Senate". The U.S. House of Representatives has impeached an official 21 times since 1789: four times for presidents, 15 times for federal judges, once for a Cabinet secretary, and once for a senator. Of the 21, the Senate voted to remove eight (all federal judges) from office. The four impeachments of presidents were: Andrew Johnson in 1868, Bill Clinton in 1998, and Donald Trump in 2019 and again in 2021. All four impeachments were followed by acquittal in the Senate. An impeachment process was also commenced against Richard Nixon, but he resigned in 1974 to avoid an impeachment vote. Almost all state constitutions set forth parallel impeachment procedures for state governments, allowing the state legislature to impeach officials of the state government. From 1789 through 2008, 14 governors were impeached (including two who were impeached twice), of whom seven were convicted.
[ { "paragraph_id": 0, "text": "Impeachment is a process by which a legislative body or other legally constituted tribunal initiates charges against a public official for misconduct. It may be understood as a unique process involving both political and legal elements.", "title": "" }, { "paragraph_id": 1, "text": "In Europe and Latin America, impeachment tends to be confined to ministerial officials as the unique nature of their positions may place ministers beyond the reach of the law to prosecute, or their misconduct is not codified into law as an offense except through the unique expectations of their high office. Both \"peers and commoners\" have been subject to the process, however. From 1990 to 2020, there have been at least 272 impeachment charges against 132 different heads of state in 63 countries. Most democracies (with the notable exception of the United States) involve the courts (often a national constitutional court) in some way.", "title": "" }, { "paragraph_id": 2, "text": "In Latin America, which includes almost 40% of the world's presidential systems, ten presidents from seven countries were removed from office by their national legislatures via impeachments or declarations of incapacity between 1978 and 2019.", "title": "" }, { "paragraph_id": 3, "text": "National legislations differ regarding both the consequences and definition of impeachment, but the intent is nearly always to expeditiously vacate the office. In most nations the process begins in the lower house of a bicameral assembly who bring charges of misconduct, then the upper house administers an impeachment trial and sentencing. 
Most commonly, an official is considered impeached after the house votes to accept the charges, and impeachment itself does not remove the official from office.", "title": "" }, { "paragraph_id": 4, "text": "Because impeachment involves a departure from the normal constitutional procedures by which individuals achieve high office (election, ratification, or appointment) and because it generally requires a supermajority, they are usually reserved for those deemed to have committed serious abuses of their office. In the United States, for example, impeachment at the federal level is limited to those who may have committed \"Treason, Bribery, or other high crimes and misdemeanors\"—the latter phrase referring to offenses against the government or the constitution, grave abuses of power, violations of the public trust, or other political crimes, even if not indictable criminal offenses. Under the United States Constitution, the House of Representatives has the sole power of impeachments while the Senate has the sole power to try impeachments (i.e., to acquit or convict); the validity of an impeachment trial is a political question that is nonjusticiable (i.e.., is not reviewable by the courts). In the United States, impeachment is a remedial rather than penal process, intended to \"effectively 'maintain constitutional government' by removing individuals unfit for office\"; persons subject to impeachment and removal remain \"liable and subject to Indictment, Trial, Judgment and Punishment, according to Law.\"", "title": "" }, { "paragraph_id": 5, "text": "Impeachment is provided for in the constitutional laws of many countries including Brazil, France, India, Ireland, the Philippines, Russia, South Korea, and the United States. It is distinct from the motion of no confidence procedure available in some countries whereby a motion of censure can be used to remove a government and its ministers from office. 
Such a procedure is not applicable in countries with presidential forms of government like the United States.", "title": "" }, { "paragraph_id": 6, "text": "The word \"impeachment\" likely derives from Old French empeechier from Latin word impedīre expressing the idea of catching or ensnaring by the 'foot' (pes, pedis), and has analogues in the modern French verb empêcher (to prevent) and the modern English impede. Medieval popular etymology also associated it (wrongly) with derivations from the Latin impetere (to attack).", "title": "Etymology and history" }, { "paragraph_id": 7, "text": "The process was first used by the English \"Good Parliament\" against William Latimer, 4th Baron Latimer in the second half of the 14th century. Following the English example, the constitutions of Virginia (1776), Massachusetts (1780) and other states thereafter adopted the impeachment mechanism, but they restricted the punishment to removal of the official from office.", "title": "Etymology and history" }, { "paragraph_id": 8, "text": "In West Africa, kings of the Ashanti Empire who violated any of the oaths taken during their enstoolment were destooled by Kingmakers. For instance, if a king punished citizens arbitrarily or was exposed to be corrupt, he would be destooled. Destoolment entailed Kingmakers removing the sandals of the king and bumping his buttocks on the ground three times. Once destooled from office, his sanctity and thus reverence were lost, as he could not exercise any powers he had as king; this included Chief administrator, Judge, and Military Commander. The now previous king was disposed of the Stool, swords and other regalia which symbolized his office and authority. He also lost the position as custodian of the land. 
However, despite being destooled from office, the king remained a member of the royal family from which he was elected.", "title": "Etymology and history" }, { "paragraph_id": 9, "text": "In Brazil, as in most other Latin American countries, \"impeachment\" refers to the definitive removal from office. The president of Brazil may be provisionally removed from office by the Chamber of Deputies and then tried and definitively removed from office by the Federal Senate. The Brazilian Constitution requires that two-thirds of the Deputies vote in favor of the opening of the impeachment process of the President and that two-thirds of the Senators vote for impeachment. State governors and municipal mayors can also be impeached by the respective legislative bodies. Article 2 of Law no. 1.079, from 10 April 1950, or \"The Law of Impeachment\", states that \"The crimes defined in this law, even when simply attempted, are subject to the penalty of loss of office, with disqualification for up to five years for the exercise of any public function, to be imposed by the Federal Senate in proceedings against the President of the Republic, Ministers of State, Ministers of the Supreme Federal Tribunal, or the Attorney General.\"", "title": "In various jurisdictions" }, { "paragraph_id": 10, "text": "Initiation: An accusation of a responsibility crime against the President may be brought by any Brazilian citizen; however, the President of the Chamber of Deputies holds the prerogative to accept the charge, which if accepted will be read at the next session and reported to the President of the Republic.", "title": "In various jurisdictions" }, { "paragraph_id": 11, "text": "Extraordinary Committee: An extraordinary committee is established, consisting of members from each political party in proportion to their party's membership. The committee is responsible for assessing the need for impeachment proceedings. The President is given ten parliamentary sessions to present their defense.
Following this, two legislative sessions are held to allow for the formulation of a legal opinion by a rapporteur regarding whether or not impeachment proceedings should be initiated and brought to trial in the Senate.", "title": "In various jurisdictions" }, { "paragraph_id": 12, "text": "The rapporteur's opinion is subject to a vote within the committee. If the majority accepts the rapporteur's opinion, it is deemed adopted. However, if the majority rejects the rapporteur's opinion, the committee adopts an alternative opinion proposed by the majority. For instance, if the rapporteur recommends against impeachment but fails to secure majority support, the committee will adopt the opinion to proceed with impeachment. Conversely, if the rapporteur advises impeachment but does not obtain majority approval, the committee will adopt the opinion not to impeach.", "title": "In various jurisdictions" }, { "paragraph_id": 13, "text": "If the committee vote is successful, the rapporteur's opinion is considered adopted, thereby determining the course of action regarding impeachment.", "title": "In various jurisdictions" }, { "paragraph_id": 14, "text": "Chamber of Deputies: The Chamber holds a roll-call vote on accepting the opinion of the Committee, requiring a supermajority of two thirds in favor of an impeachment opinion (or a supermajority of two thirds against a dismissal opinion), in order to authorize the Senate impeachment proceedings.
The President is suspended (provisionally removed) from office as soon as the Senate receives and accepts from the Chamber of Deputies the impeachment charges and decides to proceed with a trial.", "title": "In various jurisdictions" }, { "paragraph_id": 15, "text": "The Senate: The process in the Senate lacked procedural guidance until 1992, when the Senate published in the Official Diary of the Union the step-by-step procedure of the Senate's impeachment process, which involves the formation of another special committee and closely resembles the lower house process, with time constraints imposed on the steps taken. The committee's opinion must be presented within 10 days, after which it is put to a roll-call vote at the next session. The vote must proceed within a single session; the vote on President Rousseff took over 20 hours. A simple majority vote in the Senate begins formal deliberation on the complaint, immediately suspends the President from office, installs the Vice President as acting president, and begins a 20-day period for written defense as well as up to 180 days for the trial. In the event the trial proceeds slowly and exceeds 180 days, the Brazilian Constitution determines that the President is entitled to return and stay provisionally in office until the trial comes to its decision.", "title": "In various jurisdictions" }, { "paragraph_id": 16, "text": "Senate plenary deliberation: The committee interrogates the accused or their counsel (the accused has the right to decline to answer), and also holds a probative session which guarantees the accused the right to contradiction, or audiatur et altera pars, allowing access to the courts and due process of law under Article 5 of the constitution. The accused has 15 days to present written arguments in defense and answer to the evidence gathered, and then the committee shall issue an opinion on the merits within ten days.
The entire package is published for each senator before a single plenary session holds a roll-call vote; the case proceeds to trial on a simple majority and is closed otherwise.", "title": "In various jurisdictions" }, { "paragraph_id": 17, "text": "Senate trial: A hearing for the complainant and the accused convenes within 48 hours of notification from deliberation, after which a trial is scheduled by the president of the Supreme Court no less than ten days after the hearing. The senators sit as judges, while witnesses are interrogated and cross-examined; all questions must be presented to the president of the Supreme Court, who, as prescribed in the Constitution, presides over the trial. The president of the Supreme Court allots time for debate and rebuttal, after which time the parties leave the chamber and the senators deliberate on the indictment. The President of the Supreme Court reads the summary of the grounds, the charges, the defense and the evidence to the Senate. The senators in turn issue their judgement. On conviction by a supermajority of two thirds, the president of the Supreme Court pronounces the sentence and the accused is immediately notified. If there is no supermajority for conviction, the accused is acquitted.", "title": "In various jurisdictions" }, { "paragraph_id": 18, "text": "Upon conviction, the officeholder has his or her political rights revoked for eight years, which bars them from running for any office during that time.", "title": "In various jurisdictions" }, { "paragraph_id": 19, "text": "Fernando Collor de Mello, the 32nd President of Brazil, resigned in 1992 amidst impeachment proceedings.
Despite his resignation, the Senate voted to convict him and bar him from holding any office for eight years, due to evidence of bribery and misappropriation.", "title": "In various jurisdictions" }, { "paragraph_id": 20, "text": "In 2016, the Chamber of Deputies initiated an impeachment case against President Dilma Rousseff on allegations of budgetary mismanagement, a crime of responsibility under the Constitution. On 12 May 2016, after 20 hours of deliberation, the admissibility of the accusation was approved by the Senate with 55 votes in favor and 22 against (an absolute majority would have been sufficient for this step) and Vice President Michel Temer was notified to assume the duties of the President pending trial. On August 31, 61 senators voted in favor of impeachment and 20 voted against it, thus achieving the 2⁄3 majority needed for Rousseff's definitive removal. A separate vote to disqualify her from public office for five years failed, receiving less than two thirds in favor (in spite of the Constitution not separating disqualification from removal).", "title": "In various jurisdictions" }, { "paragraph_id": 21, "text": "The process of impeaching the president of Croatia can be initiated by a two-thirds majority vote in favor in the Sabor and is thereafter referred to the Constitutional Court, which must accept such a proposal with a two-thirds majority vote in favor in order for the president to be removed from office. This has never occurred in the history of the Republic of Croatia. In case of a successful impeachment motion a president's constitutional term of five years would be terminated and an election called within 60 days of the vacancy occurring. During the period of vacancy the presidential powers and duties would be carried out by the speaker of the Croatian Parliament in his/her capacity as Acting President of the Republic.", "title": "In various jurisdictions" }, { "paragraph_id": 22, "text": "In the Czech Republic, the constitution was changed in 2013.
Since 2013, the process can be started by at least three-fifths of present senators, and must be approved by at least three-fifths of all members of the Chamber of Deputies within three months. Also, the President can be impeached for high treason (newly defined in the Constitution) or any serious infringement of the Constitution.", "title": "In various jurisdictions" }, { "paragraph_id": 23, "text": "The process starts in the Senate of the Czech Republic, which alone has the right to impeach the president. After the approval by the Chamber of Deputies, the case is passed to the Constitutional Court of the Czech Republic, which has to decide the verdict against the president. If the Court finds the President guilty, then the President is removed from office and is permanently barred from being elected President of the Czech Republic again.", "title": "In various jurisdictions" }, { "paragraph_id": 24, "text": "No Czech president has ever been impeached, though members of the Senate sought to impeach President Václav Klaus in 2013. This case was dismissed by the court, which reasoned that his mandate had expired. The Senate also proposed to impeach president Miloš Zeman in 2019 but the Chamber of Deputies did not vote on the issue in time and thus the case did not even proceed to the Court.", "title": "In various jurisdictions" }, { "paragraph_id": 25, "text": "In Denmark the possibility of impeaching current and former ministers was established with the Danish Constitution of 1849. Unlike many other countries, Denmark does not have a Constitutional Court which would normally handle these types of cases. Instead Denmark has a special Court of Impeachment (in Danish: Rigsretten) which is convened whenever a current or former minister has been impeached. The role of the Impeachment Court is to process and deliver judgments against current and former ministers who are accused of unlawful conduct in office.
The legal content of ministerial responsibility is laid down in the Ministerial Accountability Act, which has its background in section 13 of the Danish Constitution, according to which the ministers' accountability is determined in more detail by law. In Denmark the normal practice is that an impeachment case is first brought before the Danish Parliament (Folketing) for debate among its members and parties. After the debate the members of the Danish Parliament vote on whether a current or former minister should be impeached. If there is a majority in the Danish Parliament for an impeachment case against a current or former minister, an Impeachment Court is called into session. In Denmark the Impeachment Court consists of up to 15 Supreme Court judges and 15 parliament members appointed by the Danish Parliament. The members of the Impeachment Court in Denmark serve a six-year term in this position.", "title": "In various jurisdictions" }, { "paragraph_id": 26, "text": "In 1995 the former Minister of Justice Erik Ninn-Hansen from the Conservative People's Party was impeached in connection with the Tamil Case. The case centered on the illegal processing of family reunification applications. From September 1987 to January 1989 applications for family reunification of Tamil refugees from civil war-torn Sri Lanka were put on hold in violation of Danish and international law. On 22 June 1995, Ninn-Hansen was found guilty of violating paragraph five subsection one of the Danish Ministerial Responsibility Act, which says: \"A minister is punished if he intentionally or through gross negligence neglects the duties incumbent on him under the constitution or legislation in general or according to the nature of his post.\" A majority of the judges in that impeachment case voted for former Minister of Justice Erik Ninn-Hansen to receive a suspended sentence of four months with one year of probation.
The sentence was suspended chiefly in consideration of Ninn-Hansen's personal circumstances, in particular his health and age – Ninn-Hansen was 73 years old when the sentence was handed down. After the verdict, Ninn-Hansen appealed to the European Court of Human Rights, complaining, among other things, that the Court of Impeachment was not impartial. The European Court of Human Rights dismissed the complaint on 18 May 1999. As a direct consequence of this case, the Conservative-led government and Prime Minister at that time Poul Schlüter was forced to step down from power.", "title": "In various jurisdictions" }, { "paragraph_id": 27, "text": "In February 2021 the former Minister for Immigration and Integration Inger Støjberg, at that time a member of the Danish Liberal Party Venstre, was impeached when it was discovered that she had, possibly in violation of both Danish and international law, tried to separate couples in refugee centres in Denmark because the wives of the couples were under legal age. According to a commission report Inger Støjberg had also lied in the Danish Parliament and failed to report relevant details to the Parliamentary Ombudsman. The decision to initiate an impeachment case was adopted by the Danish Parliament with a 141–30 vote (in Denmark at least 90 members of the parliament need to vote for impeachment before it can be implemented). On 13 December 2021 former Minister for Immigration and Integration Inger Støjberg was convicted by the special Court of Impeachment of illegally separating asylum seeker families contrary to Danish and international law, and sentenced to 60 days in prison.
The majority of the judges in the special Court of Impeachment (25 out of 26 judges) found it proven that Inger Støjberg on 10 February 2016 decided that an accommodation scheme should apply without the possibility of exceptions, so that all asylum-seeking spouses and cohabiting couples where one partner was a minor aged 15–17 had to be separated and accommodated in separate asylum centers. On 21 December, a majority in the Folketing voted that the sentence made her unworthy of sitting in the Folketing, and she therefore immediately lost her seat.", "title": "In various jurisdictions" }, { "paragraph_id": 28, "text": "In France the comparable procedure is called destitution. The president of France can be impeached by the French Parliament for willfully violating the Constitution or the national laws. The process of impeachment is set out in Article 68 of the French Constitution. A group of senators or a group of members of the National Assembly can begin the process. Then, both the National Assembly and the Senate must acknowledge the impeachment. After the upper and lower houses' agreement, they unite to form the High Court. Finally, the High Court decides whether or not to declare the impeachment of the president of France.", "title": "In various jurisdictions" }, { "paragraph_id": 29, "text": "The federal president of Germany can be impeached both by the Bundestag and by the Bundesrat for willfully violating federal law. Once the Bundestag or the Bundesrat impeaches the president, the Federal Constitutional Court decides whether the President is guilty as charged and, if this is the case, whether to remove him or her from office. The Federal Constitutional Court also has the power to remove federal judges from office for willfully violating core principles of the federal constitution or a state constitution.
The impeachment procedure is regulated in Article 61 of the Basic Law for the Federal Republic of Germany.", "title": "In various jurisdictions" }, { "paragraph_id": 30, "text": "There is no formal impeachment process for the chancellor of Germany; however, the Bundestag can replace the chancellor at any time by voting for a new chancellor (constructive vote of no confidence, Article 67 of the Basic Law).", "title": "In various jurisdictions" }, { "paragraph_id": 31, "text": "There has never been an impeachment of the President. Constructive votes of no confidence against the chancellor occurred in 1972 and 1982, with only the second one being successful.", "title": "In various jurisdictions" }, { "paragraph_id": 32, "text": "The chief executive of Hong Kong can be impeached by the Legislative Council. A motion for investigation, initiated jointly by at least one-fourth of all the legislators charging the Chief Executive with \"serious breach of law or dereliction of duty\" and refusing to resign, shall first be passed by the council. An independent investigation committee, chaired by the chief justice of the Court of Final Appeal, will then carry out the investigation and report back to the council. If the Council finds the evidence sufficient to substantiate the charges, it may pass a motion of impeachment by a two-thirds majority.", "title": "In various jurisdictions" }, { "paragraph_id": 33, "text": "However, the Legislative Council does not have the power to actually remove the chief executive from office, as the chief executive is appointed by the Central People's Government (State Council of China). The council can only report the result to the Central People's Government for its decision.", "title": "In various jurisdictions" }, { "paragraph_id": 34, "text": "Article 13 of Hungary's Fundamental Law (constitution) provides for the process of impeaching and removing the president.
The president enjoys immunity from criminal prosecution while in office, but may be charged with crimes committed during his term afterwards. Should the president violate the constitution while discharging his duties or commit a willful criminal offense, he may be removed from office. Removal proceedings may be proposed by the concurring recommendation of one-fifth of the 199 members of the country's unicameral Parliament. Parliament votes on the proposal by secret ballot, and if two thirds of all representatives agree, the president is impeached. Once impeached, the president's powers are suspended, and the Constitutional Court decides whether or not the President should be removed from office.", "title": "In various jurisdictions" }, { "paragraph_id": 35, "text": "In India, the president and judges, including the chief justice of the supreme court and high courts, can be impeached by the parliament before the expiry of their term for violation of the Constitution. Under Article 361 of the constitution, no penalty other than impeachment can be imposed on a sitting president for violation of the Constitution. However, after removal a president can be punished for already proven unlawful activity, such as violating the constitution. No president has faced impeachment proceedings, so the provisions for impeachment have never been tested. The sitting president cannot be charged and would need to step down for that to happen.", "title": "In various jurisdictions" }, { "paragraph_id": 36, "text": "", "title": "In various jurisdictions" }, { "paragraph_id": 37, "text": "In the Republic of Ireland formal impeachment applies only to the Irish president. Article 12 of the Irish Constitution provides that, unless judged to be \"permanently incapacitated\" by the Supreme Court, the president can be removed from office only by the houses of the Oireachtas (parliament) and only for the commission of \"stated misbehaviour\".
Either house of the Oireachtas may impeach the president, but only by a resolution approved by a majority of at least two thirds of its total number of members; and a house may not consider a proposal for impeachment unless requested to do so by at least thirty of its number.", "title": "In various jurisdictions" }, { "paragraph_id": 38, "text": "Where one house impeaches the president, the other house either investigates the charge or commissions another body or committee to do so. The investigating house can remove the president if it decides, by at least a two-thirds majority of its members, both that the president is guilty of the charge and that the charge is sufficiently serious as to warrant the president's removal. To date, no impeachment of an Irish president has taken place. The president holds a largely ceremonial office, the dignity of which is considered important, so it is likely that a president would resign from office long before undergoing formal conviction or impeachment.", "title": "In various jurisdictions" }, { "paragraph_id": 39, "text": "In Italy, according to Article 90 of the Constitution, the President of Italy can be impeached through a majority vote of the Parliament in joint session for high treason and for attempting to overthrow the Constitution.
If impeached, the president of the Republic is then tried by the Constitutional Court, supplemented by sixteen citizens over forty years of age chosen by lot from a list compiled by the Parliament every nine years.", "title": "In various jurisdictions" }, { "paragraph_id": 40, "text": "The Italian press and political forces made use of the term \"impeachment\" for the attempt by some members of the parliamentary opposition to initiate the procedure provided for in Article 90 against Presidents Francesco Cossiga (1991), Giorgio Napolitano (2014) and Sergio Mattarella (2018).", "title": "In various jurisdictions" }, { "paragraph_id": 41, "text": "Under Article 78 of the Constitution of Japan, judges can be impeached. The voting method is specified by law. The National Diet has two organs for this purpose, namely 裁判官訴追委員会 (Saibankan sotsui iinkai) and 裁判官弾劾裁判所 (Saibankan dangai saibansho), the latter established by Article 64 of the Constitution. The former has a role similar to a prosecutor and the latter is analogous to a court. Seven judges have been removed through this process.", "title": "In various jurisdictions" }, { "paragraph_id": 42, "text": "Members of the Liechtenstein Government can be impeached before the State Court for breaches of the Constitution or of other laws. As a hereditary monarchy the Sovereign Prince cannot be impeached as he \"is not subject to the jurisdiction of the courts and does not have legal responsibility\". The same is true of any member of the Princely House who exercises the function of head of state should the Prince be temporarily prevented or in preparation for the Succession.", "title": "In various jurisdictions" }, { "paragraph_id": 43, "text": "In the Republic of Lithuania, the president may be impeached by a three-fifths majority in the Seimas. President Rolandas Paksas was removed from office by impeachment on 6 April 2004 after the Constitutional Court of Lithuania found him guilty of having violated his oath and the constitution.
He was the first European head of state to have been impeached.", "title": "In various jurisdictions" }, { "paragraph_id": 44, "text": "In Norway, members of government, representatives of the national assembly (Stortinget) and Supreme Court judges can be impeached for criminal offenses tied to their duties and committed in office, according to the Constitution of 1814, §§ 86 and 87. The procedural rules were modeled after the U.S. rules and are quite similar to them. Impeachment has been used eight times since 1814, most recently in 1927. Many argue that impeachment has fallen into desuetude. In cases of impeachment, a specially appointed court (Riksrett) is convened.", "title": "In various jurisdictions" }, { "paragraph_id": 45, "text": "Impeachment in the Philippines follows procedures similar to the United States. Under Sections 2 and 3, Article XI, Constitution of the Philippines, the House of Representatives of the Philippines has the exclusive power to initiate all cases of impeachment against the president, vice president, members of the Supreme Court, members of the Constitutional Commissions (Commission on Elections, Civil Service Commission and the Commission on Audit), and the ombudsman. When a third of its membership has endorsed article(s) of impeachment, they are then transmitted to the Senate of the Philippines, which, sitting as an impeachment tribunal, tries and decides the impeachment case.", "title": "In various jurisdictions" }, { "paragraph_id": 46, "text": "A main difference from U.S. proceedings, however, is that only one third of House members are required to approve the motion to impeach the president (as opposed to a simple majority of those present and voting in their U.S. counterpart). In the Senate, selected members of the House of Representatives act as the prosecutors and the senators act as judges with the Senate president presiding over the proceedings (the chief justice jointly presides with the Senate president if the president is on trial).
As in the United States, conviction requires that a minimum of two thirds (i.e. 16 of 24 members) of all the members of the Senate vote in favor. If an impeachment attempt is unsuccessful or the official is acquitted, no new cases can be filed against that impeachable official for at least one full year.", "title": "In various jurisdictions" }, { "paragraph_id": 47, "text": "President Joseph Estrada was the first official impeached by the House in 2000, but the trial ended prematurely amid public outrage after a motion to open an envelope of evidence was narrowly defeated by his allies. Estrada was deposed days later during the 2001 EDSA Revolution.", "title": "In various jurisdictions" }, { "paragraph_id": 48, "text": "In 2005, 2006, 2007 and 2008, impeachment complaints were filed against President Gloria Macapagal Arroyo, but none of the cases reached the required endorsement of 1⁄3 of the members for transmittal to, and trial by, the Senate.", "title": "In various jurisdictions" }, { "paragraph_id": 49, "text": "In March 2011, the House of Representatives impeached Ombudsman Merceditas Gutierrez, becoming the second person to be impeached. In April, Gutierrez resigned prior to the Senate's convening as an impeachment court.", "title": "In various jurisdictions" }, { "paragraph_id": 50, "text": "In December 2011, in what was described as \"blitzkrieg fashion\", 188 of the 285 members of the House of Representatives voted to transmit the 56-page articles of impeachment against Supreme Court chief justice Renato Corona.", "title": "In various jurisdictions" }, { "paragraph_id": 51, "text": "To date, three officials have been impeached by the House of Representatives, and two were not convicted. The third, Chief Justice Renato C.
Corona, was convicted on 29 May 2012 by the Senate under Article II of the Articles of Impeachment (for betraying public trust), with 20–3 votes from the Senator Judges.", "title": "In various jurisdictions" }, { "paragraph_id": 52, "text": "The first impeachment process against Pedro Pablo Kuczynski, who had been the incumbent President of Peru since 2016, was initiated by the Congress of Peru on 15 December 2017. According to Luis Galarreta, the President of the Congress, the whole process of impeachment could have taken as little as a week to complete. This event was part of the second stage of the political crisis generated by the confrontation between the Government of Pedro Pablo Kuczynski and the Congress, in which the opposition Popular Force had an absolute majority. The impeachment request was rejected by the congress on 21 December 2017, as it failed to obtain sufficient votes for removal.", "title": "In various jurisdictions" }, { "paragraph_id": 53, "text": "In Romania, the president can be impeached by Parliament and is then suspended. A referendum then follows to determine whether the suspended President should be removed from office. President Traian Băsescu was impeached twice by the Parliament: in 2007 and then again in July 2012. A referendum was held on 19 May 2007 and a large majority of the electorate voted against removing the president from office.
For the most recent suspension a referendum was held on July 29, 2012; the results were heavily against the president, but the referendum was invalidated due to low turnout.", "title": "In various jurisdictions" }, { "paragraph_id": 54, "text": "In 1999, members of the State Duma of Russia, led by the Communist Party of the Russian Federation, unsuccessfully attempted to impeach President Boris Yeltsin on charges relating to his role in the 1993 Russian constitutional crisis and the launching of the First Chechen War (1995–96).", "title": "In various jurisdictions" }, { "paragraph_id": 55, "text": "The Constitution of Singapore allows the impeachment of a sitting president on charges of treason, violation of the Constitution, corruption, or attempting to mislead the Presidential Elections Committee for the purpose of demonstrating eligibility to be elected as president. The prime minister or at least one-quarter of all members of Parliament (MPs) can move an impeachment motion, which can succeed only if at least half of all MPs (excluding nominated members) vote in favor, whereupon the chief justice of the Supreme Court will appoint a tribunal to investigate allegations against the president. If the tribunal finds the president guilty, or otherwise declares that the president is \"permanently incapable of discharging the functions of his office by reason of mental or physical infirmity\", Parliament will hold a vote on a resolution to remove the president from office, which requires a three-quarters majority to succeed. No president has ever been removed from office in this fashion.", "title": "In various jurisdictions" }, { "paragraph_id": 56, "text": "When the Union of South Africa was established in 1910, the only officials who could be impeached (though the term itself was not used) were the chief justice and judges of the Supreme Court of South Africa.
The scope was broadened when the country became a republic in 1961, to include the state president. It was further broadened in 1981 to include the new office of vice state president; and in 1994 to include the executive deputy presidents, the public protector and the Auditor-General. Since 1997, members of certain commissions established by the Constitution can also be impeached. The grounds for impeachment, and the procedures to be followed, have changed several times over the years.", "title": "In various jurisdictions" }, { "paragraph_id": 57, "text": "According to Article 65(1) of the Constitution of South Korea, the President, Prime Minister, members of the State Council, heads of Executive Ministries, Justices of the Constitutional Court, judges, members of the National Election Commission, and the Chairperson and members of the Board of Audit and Inspection can be impeached by the National Assembly when they have violated the Constitution or other statutory duties. Under Article 65(2) of the Constitution, a motion of impeachment must be proposed by at least one-third of the total members of the National Assembly and passed by a simple majority of the total members. Exceptionally, a motion to impeach the President of South Korea must be proposed by a majority and approved by two-thirds of the total members. When an impeachment motion is passed in the National Assembly, it is finally reviewed under the jurisdiction of the Constitutional Court of Korea, according to Article 111(1) of the Constitution. During the Constitutional Court's review of the impeachment, the impeached official is suspended from exercising power under Article 65(3) of the Constitution.", "title": "In various jurisdictions" }, { "paragraph_id": 58, "text": "Two presidents have been impeached since the establishment of the Republic of Korea in 1948. Roh Moo-hyun was impeached by the National Assembly in 2004, but the impeachment was overturned by the Constitutional Court.
Park Geun-hye in 2016 was impeached by the National Assembly, and the impeachment was confirmed by the Constitutional Court on March 10, 2017.", "title": "In various jurisdictions" }, { "paragraph_id": 59, "text": "In February 2021, Judge Lim Seong-geun of the Busan High Court was impeached by the National Assembly for meddling in politically sensitive trials, the first ever impeachment of a judge in Korean history. Unlike presidential impeachments, only a simple majority is required to impeach. Judge Lim's term expired before the Constitutional Court could render a verdict, leading the court to dismiss the case.", "title": "In various jurisdictions" }, { "paragraph_id": 60, "text": "In Turkey, according to the Constitution, the Grand National Assembly may initiate an investigation of the president, the vice president or any member of the Cabinet upon the proposal of a simple majority of its total members and, within a period of less than a month, the approval of three-fifths of the total members. The investigation is carried out by a commission of fifteen members of the Assembly, each nominated by the political parties in proportion to their representation therein. The commission submits a report indicating the outcome of the investigation to the speaker within two months. If the investigation is not completed within this period, the commission's time may be renewed for another month. Within ten days of its submission to the speaker, the report is distributed to all members of the Assembly, and ten days after its distribution, the report is discussed on the floor. Upon the approval of two-thirds of the total members of the Assembly by secret vote, the person or persons about whom the investigation was conducted may be tried before the Constitutional Court. The trial must be finalized within three months; if it is not, a one-time additional period of three months is granted.
A president about whom an investigation has been initiated may not call for an election. A president who is convicted by the Court is removed from office.", "title": "In various jurisdictions" }, { "paragraph_id": 61, "text": "The provisions of this article also apply to offenses that the president is alleged to have committed during his term of office.", "title": "In various jurisdictions" }, { "paragraph_id": 62, "text": "In the United Kingdom, in principle, anybody may be prosecuted and tried by the two Houses of Parliament for any crime. The first recorded impeachment is that of William Latimer, 4th Baron Latimer during the Good Parliament of 1376. The latest was that of Henry Dundas, 1st Viscount Melville, which started in 1805 and ended with his acquittal in June 1806. Over the centuries, the procedure has been supplemented by other forms of oversight, including select committees, confidence motions, and judicial review, while the privilege of peers to be tried only in the House of Lords was abolished in 1948 (see Judicial functions of the House of Lords § Trials); thus impeachment, which has not kept up with modern norms of democracy or procedural fairness, is generally considered obsolete.", "title": "In various jurisdictions" }, { "paragraph_id": 63, "text": "In the federal system, Article One of the United States Constitution provides that the House of Representatives has the \"sole Power of Impeachment\" and the Senate has \"the sole Power to try all Impeachments\". Article Two provides that \"The President, Vice President and all civil Officers of the United States, shall be removed from Office on Impeachment for, and Conviction of, Treason, Bribery, or other high Crimes and Misdemeanors.\" In the United States, impeachment is the first of two stages; an official may be impeached by a majority vote of the House, but conviction and removal from office in the Senate requires \"the concurrence of two thirds of the members present\".
Impeachment is analogous to an indictment.", "title": "In various jurisdictions" }, { "paragraph_id": 64, "text": "According to the House practice manual, \"Impeachment is a constitutional remedy to address serious offenses against the system of government. It is the first step in a remedial process—that of removal from public office and possible disqualification from holding further office. The purpose of impeachment is not punishment; rather, its function is primarily to maintain constitutional government.\" Impeachment may be understood as a unique process involving both political and legal elements. The Constitution provides that \"Judgment in Cases of Impeachment shall not extend further than to removal from Office, and disqualification to hold and enjoy any Office of honor, Trust or Profit under the United States: but the Party convicted shall nevertheless be liable and subject to Indictment, Trial, Judgment and Punishment, according to Law.\" It is generally accepted that \"a former President may be prosecuted for crimes of which he was acquitted by the Senate\".", "title": "In various jurisdictions" }, { "paragraph_id": 65, "text": "The U.S. House of Representatives has impeached an official 21 times since 1789: four times for presidents, 15 times for federal judges, once for a Cabinet secretary, and once for a senator. Of the 21, the Senate voted to remove 8 (all federal judges) from office. The four impeachments of presidents were: Andrew Johnson in 1868, Bill Clinton in 1998, and Donald Trump in 2019 and again in 2021. All four impeachments were followed by acquittal in the Senate. An impeachment process was also commenced against Richard Nixon, but he resigned in 1974 to avoid an impeachment vote.", "title": "In various jurisdictions" }, { "paragraph_id": 66, "text": "Almost all state constitutions set forth parallel impeachment procedures for state governments, allowing the state legislature to impeach officials of the state government. 
From 1789 through 2008, 14 governors were impeached (including two who were impeached twice), of whom seven were convicted.", "title": "In various jurisdictions" } ]
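The supermajority thresholds that recur in the paragraphs above (for example, South Korea's Article 65(2)) amount to simple counting arithmetic. The sketch below is a hypothetical illustration, not code from any legal source; the function name and the choice to count thresholds against the total membership are assumptions drawn from the description of the South Korean rules.

```python
# Hypothetical sketch of the supermajority arithmetic behind impeachment
# votes, following the South Korean example described above: a majority
# of all members for ordinary officials, two-thirds of all members for
# the President. Names and structure are illustrative only.
import math

def impeachment_passes(total_members: int, votes_in_favor: int,
                       is_president: bool) -> bool:
    """Return True if the motion clears the applicable threshold."""
    if is_president:
        # Two-thirds of the total membership, rounded up.
        required = math.ceil(2 * total_members / 3)
    else:
        # A simple majority of the total membership.
        required = total_members // 2 + 1
    return votes_in_favor >= required

# In a 300-seat assembly, 200 votes clear the two-thirds bar for a
# president, while 151 suffice for other officials.
print(impeachment_passes(300, 200, is_president=True))    # True
print(impeachment_passes(300, 199, is_president=True))    # False
print(impeachment_passes(300, 151, is_president=False))   # True
```

The key design point is that both thresholds are measured against total membership rather than members present, which is why an abstention effectively counts against the motion.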
Impeachment is a process by which a legislative body or other legally constituted tribunal initiates charges against a public official for misconduct. It may be understood as a unique process involving both political and legal elements. In Europe and Latin America, impeachment tends to be confined to ministerial officials, as the unique nature of their positions may place ministers beyond the reach of ordinary prosecution, or their misconduct may not be codified in law as an offense except through the unique expectations of their high office. Both "peers and commoners" have been subject to the process, however. From 1990 to 2020, there were at least 272 impeachment charges against 132 different heads of state in 63 countries. Most democracies involve the courts in some way. In Latin America, which includes almost 40% of the world's presidential systems, ten presidents from seven countries were removed from office by their national legislatures via impeachments or declarations of incapacity between 1978 and 2019. National laws differ regarding both the definition and consequences of impeachment, but the intent is nearly always to expeditiously vacate the office. In most nations the process begins in the lower house of a bicameral assembly, which brings charges of misconduct; the upper house then conducts the impeachment trial and sentencing. Most commonly, an official is considered impeached after the house votes to accept the charges, and impeachment itself does not remove the official from office. Because impeachment involves a departure from the normal constitutional procedures by which individuals achieve high office, and because it generally requires a supermajority, it is usually reserved for those deemed to have committed serious abuses of their office.
In the United States, for example, impeachment at the federal level is limited to those who may have committed "Treason, Bribery, or other high crimes and misdemeanors"—the latter phrase referring to offenses against the government or the constitution, grave abuses of power, violations of the public trust, or other political crimes, even if not indictable criminal offenses. Under the United States Constitution, the House of Representatives has the sole power of impeachment while the Senate has the sole power to try impeachments; the validity of an impeachment trial is a political question that is nonjusticiable. In the United States, impeachment is a remedial rather than penal process, intended to "effectively 'maintain constitutional government' by removing individuals unfit for office"; persons subject to impeachment and removal remain "liable and subject to Indictment, Trial, Judgment and Punishment, according to Law." Impeachment is provided for in the constitutional laws of many countries, including Brazil, France, India, Ireland, the Philippines, Russia, South Korea, and the United States. It is distinct from the motion of no confidence procedure available in some countries, whereby a motion of censure can be used to remove a government and its ministers from office. Such a procedure is not applicable in countries with presidential forms of government like the United States.
2001-12-06T02:01:20Z
2023-12-14T14:19:03Z
[ "Template:Main", "Template:Unreferenced section", "Template:Circular reference", "Template:Section link", "Template:Webarchive", "Template:Commonscatinline", "Template:Cite news", "Template:More citations needed", "Template:Frac", "Template:Better source needed", "Template:Cite journal", "Template:Short description", "Template:Use dmy dates", "Template:Anchor", "Template:Reflist", "Template:Cite book", "Template:Cite encyclopedia", "Template:Wiktionary-inline", "Template:About", "Template:Rp", "Template:See also", "Template:Cite web", "Template:Lang", "Template:Nbsp", "Template:Cbignore", "Template:Authority control", "Template:Dead link", "Template:Main articles" ]
https://en.wikipedia.org/wiki/Impeachment
15,334
Ibizan Hound
The Ibizan Hound (Spanish: podenco ibicenco, Catalan: ca eivissenc) is a lean, agile dog of the hound family. There are two hair types of the breed: smooth and wire. The more commonly seen type is the smooth. Some consider there to be a third type, long, but the longhair is most likely a variation of the wire. The Ibizan Hound is an elegant and agile breed, with an athletic and attractive outline and a ground-covering springy trot. Though graceful in appearance, it has good bone girth and is a rugged/hardy breed. Its large upright ears — a hallmark of the breed — are broad at the base and frame a long and elegant headpiece. The neck is long and lean. It has a unique front assembly with well laid-back shoulders and a relatively straight upper arm. In both the smooth and wire-coated varieties, the coat is a combination of red and white, with the nose, ears, eye rims, and pads of the feet a light tan color. Its eyes are a striking amber color and have an alert and intelligent expression. The Ibizan may range in height, depending on which standard is followed, from 20 to 30 in (51 to 76 cm) and weigh from 42 to 63 lb (19 to 29 kg), males being larger than females. Ibizan Hounds are intelligent, active, and engaging by nature. They rank 53rd in Stanley Coren's book The Intelligence of Dogs, considered average working/obedience intelligence, but many Ibizan owners enjoy recounting a multitude of examples of their problem-solving abilities. They are true "clowns" of the dog world, delighting in entertaining their people with their antics. Though somewhat independent and stubborn at times, they take well to training if positive methods are used, but they will balk at punitive training methods. They are generally quiet but will alarm bark if necessary, so they make good watch dogs. They are sensitive hounds, and very good around children and other dogs alike. They generally make good house dogs but are active and athletic, and therefore need a lot of daily exercise.
They do not make good kennel dogs. Ibizan hounds are sweet, but they are very stubborn and independent. Ibizan Hounds are "escapologists": they are able to jump incredible heights from a standstill, so they need very tall fences. They have also been known to climb, and many can escape from crates and can open baby gates and even locks. They have a strong prey drive and therefore cannot be trusted off leash unless in a safely enclosed area. Once off the leash, they might not come back for a long time. A hound that knows where its home is and the surrounding area will usually return unscathed. The Ibizan Hound is typical of the hound group in that it rarely suffers from hereditary illness. Minor health concerns for the breed include seizures and allergies; very rarely, one will see axonal dystrophy, cataract, retinal dysplasia and deafness in the breed. Ibizan Hound owners should have their dogs' eyes tested by a veterinarian before breeding. CERF and BAER testing is recommended for the breed. Ibizan Hounds are sensitive to barbiturate anesthesia, and typically live between 12 and 14 years. DNA analysis indicates that the breed was formed recently from other breeds. The Ibizan Hound is similar in function and type to several breeds, such as the Pharaoh Hound, the Cirneco dell'Etna, the Portuguese Podengo, and the Podenco Canario. The Ibizan Hound is the largest of these breeds, classified by the Fédération Cynologique Internationale as primitive types. This breed originates on the island of Ibiza and has been traditionally used in the Catalan-speaking areas of Spain, and in France, where it was known under the name of le charnigue, to hunt rabbits and other small game. The Ibizan Hound is a fast dog that can hunt on all types of terrain, working by scent, sound and sight. Hunters run these dogs in mostly female packs, with perhaps a male or two, as the female is considered the better hunter.
Traditionally, a farmer might have one dog, and a very well-off farmer two dogs, to catch rabbits for food. In the last twenty years, however, hunting with them has come to be seen as a sport, in which between five and fifteen dogs can be seen in the chase of one rabbit. The Ibizan Hound authority Miquel Rosselló has provided a detailed description of a working trial which characterises their typical hunting technique and action, strikingly illustrated with action photos by Charles Camberoque which demonstrate hunt behaviour and typical hunt terrain. While local hunters will at times use one dog or a brace, and frequently packs of six to eight or as many as fifteen, the working trial requires an evaluation of one or two braces. A brace is called a colla. The couples should be tested on at least two to five rabbits (not hares), without the use of any other hunting aid. An inspection and evaluation of the exterior, fitness, character and obedience of the dogs is recommended prior to the hunt. The trial is divided into five parts. The dogs should show: (1) careful tracking and scenting of the rabbit, without being distracted in the least, 0-30 points; (2) correct signalling of the game, patient stand, strong jump into the air, obedience, 0-10 points; (3) chase, giving tongue, speed, sureness, anticipation, 0-30 points; (4) putting the game to cover at close quarters, listening, waiting, obedience, correct attack, 0-10 points; and (5) good catch, or correct indication of the game's location, retrieval, obedience, 0-20 points. Individual dogs are expected to show a great degree of discipline, obedience and co-operation. They should be extremely agile, have good speed and a powerful vertical jump from a stationary position in rough and often heavily covered ground. They should have excellent scent-tracking abilities, give tongue at the right time when approaching the game closely, and otherwise be silent so that they can locate the game by sound.
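The five-part scoring scheme just described can be summarized in a short sketch. Only the per-part point maxima come from the trial description above; the part labels, the dictionary layout, and the function name are hypothetical.

```python
# Hypothetical scorer for the five-part Ibizan Hound working trial
# described above. The maximum points per part follow the text; the
# short part labels and overall structure are illustrative only.

TRIAL_PARTS = {
    "tracking and scenting": 30,
    "signalling and stand": 10,
    "chase": 30,
    "putting game to cover": 10,
    "catch or indication": 20,
}

def score_trial(awarded: dict) -> int:
    """Sum the awarded points, rejecting scores outside each part's range."""
    total = 0
    for part, maximum in TRIAL_PARTS.items():
        points = awarded.get(part, 0)  # unscored parts count as 0
        if not 0 <= points <= maximum:
            raise ValueError(f"{part}: {points} is outside 0-{maximum}")
        total += points
    return total

# The five maxima sum to a 100-point scale.
print(sum(TRIAL_PARTS.values()))  # 100
print(score_trial({"tracking and scenting": 25, "chase": 28,
                   "catch or indication": 15}))  # 68
```

Note that the two highest-weighted parts, tracking and the chase, together account for 60 of the 100 available points, reflecting the emphasis the trial places on scent work and pursuit.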
In the United States, the Ibizan Hound frequently competes in lure coursing through the AKC and ASFA, and also competes in LGRA straight racing and NOTRA oval track racing. Some parts of the country also use them for coursing live prey, generally jackrabbits. The Ibizan Hound breed is recognized by the Fédération Cynologique Internationale, Continental Kennel Club, American Kennel Club, United Kennel Club, Kennel Club of Great Britain, Canadian Kennel Club, National Kennel Club, New Zealand Kennel Club, Australian National Kennel Council, America's Pet Registry, and American Canine Registry. It was fully recognized by the American Kennel Club in 1979. According to journalist Norman Lewis, when an owner no longer wants to own one of these dogs (having too much of an appetite, for instance), it is considered very bad luck to kill the dog. Instead, they release the dog on the other side of the island, so that someone else might 'adopt' the animal.
[ { "paragraph_id": 0, "text": "The Ibizan Hound (Spanish: podenco ibicenco, Catalan: ca eivissenc) is a lean, agile dog of the hound family. There are two hair types of the breed: smooth and wire. The more commonly seen type is the smooth. Some consider there to be a third type, long, but the longhair is most likely a variation of the wire.", "title": "" }, { "paragraph_id": 1, "text": "The Ibizan Hound is an elegant and agile breed, with an athletic and attractive outline and a ground-covering springy trot. Though graceful in appearance, it has good bone girth and is a rugged/hardy breed. Its large upright ears — a hallmark of the breed — are broad at the base and frame a long and elegant headpiece. The neck is long and lean. It has a unique front assembly with well laid-back shoulders and a relatively straight upper arm. In both the smooth and wire-coated varieties, the coat is a combination of red and white, with the nose, ears, eye rims, and pads of the feet a light tan color. Its eyes are a striking amber color and have an alert and intelligent expression. The Ibizan may range in height, depending on which standard is followed, from 20 to 30 in (51 to 76 cm) and weigh from 42 to 63 lb (19 to 29 kg), males being larger than females.", "title": "Description" }, { "paragraph_id": 2, "text": "Ibizan Hounds are intelligent, active, and engaging by nature. They rank 53rd in Stanley Coren's book The Intelligence of Dogs, considered average working/obedience intelligence, but many Ibizan owners enjoy recounting a multitude of examples of their problem-solving abilities. They are true \"clowns\" of the dog world, delighting in entertaining their people with their antics. Though somewhat independent and stubborn at times, they take well to training if positive methods are used, but they will balk at punitive training methods. They are generally quiet but will alarm bark if necessary, so they make good watch dogs.
They are sensitive hounds, and very good around children and other dogs alike. They generally make good house dogs but are active and athletic, and therefore need a lot of daily exercise. They do not make good kennel dogs. Ibizan hounds are sweet, but they are very stubborn and independent.", "title": "Description" }, { "paragraph_id": 3, "text": "Ibizan Hounds are \"escapologists\": they are able to jump incredible heights from a standstill, so they need very tall fences. They have also been known to climb, and many can escape from crates and can open baby gates and even locks. They have a strong prey drive and therefore cannot be trusted off leash unless in a safely enclosed area. Once off the leash, they might not come back for a long time. A hound that knows where its home is and the surrounding area will usually return unscathed.", "title": "Description" }, { "paragraph_id": 4, "text": "The Ibizan Hound is typical of the hound group in that it rarely suffers from hereditary illness. Minor health concerns for the breed include seizures and allergies; very rarely, one will see axonal dystrophy, cataract, retinal dysplasia and deafness in the breed. Ibizan Hound owners should have their dogs' eyes tested by a veterinarian before breeding. CERF and BAER testing is recommended for the breed. Ibizan Hounds are sensitive to barbiturate anesthesia, and typically live between 12 and 14 years.", "title": "Health" }, { "paragraph_id": 5, "text": "DNA analysis indicates that the breed was formed recently from other breeds.", "title": "History" }, { "paragraph_id": 6, "text": "The Ibizan Hound is similar in function and type to several breeds, such as the Pharaoh Hound, the Cirneco dell'Etna, the Portuguese Podengo, and the Podenco Canario.
The Ibizan Hound is the largest of these breeds, classified by the Fédération Cynologique Internationale as primitive types.", "title": "History" }, { "paragraph_id": 7, "text": "This breed originates on the island of Ibiza and has been traditionally used in the Catalan-speaking areas of Spain, and in France, where it was known under the name of le charnigue, to hunt rabbits and other small game. The Ibizan Hound is a fast dog that can hunt on all types of terrain, working by scent, sound and sight. Hunters run these dogs in mostly female packs, with perhaps a male or two, as the female is considered the better hunter.", "title": "Use" }, { "paragraph_id": 8, "text": "Traditionally, a farmer might have one dog, and a very well-off farmer two dogs, to catch rabbits for food. In the last twenty years, however, hunting with them has come to be seen as a sport, in which between five and fifteen dogs can be seen in the chase of one rabbit.", "title": "Use" }, { "paragraph_id": 9, "text": "The Ibizan Hound authority Miquel Rosselló has provided a detailed description of a working trial which characterises their typical hunting technique and action, strikingly illustrated with action photos by Charles Camberoque which demonstrate hunt behaviour and typical hunt terrain. While local hunters will at times use one dog or a brace, and frequently packs of six to eight or as many as fifteen, the working trial requires an evaluation of one or two braces. A brace is called a colla. The couples should be tested on at least two to five rabbits (not hares), without the use of any other hunting aid. An inspection and evaluation of the exterior, fitness, character and obedience of the dogs is recommended prior to the hunt. The trial is divided into five parts.
The dogs should show: (1) careful tracking and scenting of the rabbit, without being distracted in the least, 0-30 points; (2) correct signalling of the game, patient stand, strong jump into the air, obedience, 0-10 points; (3) chase, giving tongue, speed, sureness, anticipation, 0-30 points; (4) putting the game to cover at close quarters, listening, waiting, obedience, correct attack, 0-10 points; and (5) good catch, or correct indication of the game's location, retrieval, obedience, 0-20 points.", "title": "Use" }, { "paragraph_id": 10, "text": "Individual dogs are expected to show a great degree of discipline, obedience and co-operation. They should be extremely agile, have good speed and a powerful vertical jump from a stationary position in rough and often heavily covered ground. They should have excellent scent-tracking abilities, give tongue at the right time when approaching the game closely, and otherwise be silent so that they can locate the game by sound.", "title": "Use" }, { "paragraph_id": 11, "text": "In the United States, the Ibizan Hound frequently competes in lure coursing through the AKC and ASFA, and also competes in LGRA straight racing and NOTRA oval track racing. Some parts of the country also use them for coursing live prey, generally jackrabbits.", "title": "Use" }, { "paragraph_id": 12, "text": "The Ibizan Hound breed is recognized by the Fédération Cynologique Internationale, Continental Kennel Club, American Kennel Club, United Kennel Club, Kennel Club of Great Britain, Canadian Kennel Club, National Kennel Club, New Zealand Kennel Club, Australian National Kennel Council, America's Pet Registry, and American Canine Registry. It was fully recognized by the American Kennel Club in 1979.", "title": "Use" }, { "paragraph_id": 13, "text": "According to journalist Norman Lewis, when an owner no longer wants to own one of these dogs (having too much of an appetite, for instance), it is considered very bad luck to kill the dog.
Instead, they release the dog on the other side of the island, so that someone else might 'adopt' the animal.", "title": "In folk culture" } ]
The Ibizan Hound is a lean, agile dog of the hound family. There are two hair types of the breed: smooth and wire. The more commonly seen type is the smooth. Some consider there to be a third type, long, but the longhair is most likely a variation of the wire.
2001-12-08T11:54:42Z
2023-10-31T23:34:06Z
[ "Template:Unreferenced section", "Template:Cvt", "Template:Reflist", "Template:Cite web", "Template:Cite book", "Template:Spanish dogs", "Template:Hounds", "Template:Cn", "Template:Cite journal", "Template:Primitive dogs", "Template:Authority control", "Template:Infobox Dogbreed", "Template:Lang-es", "Template:Lang-ca", "Template:Webarchive" ]
https://en.wikipedia.org/wiki/Ibizan_Hound
15,335
Irish Wolfhound
The Irish Wolfhound is a British breed of large sighthound and one of the largest of all breeds of dog. The modern breed was developed in the late 19th century by G.A. Graham, whose aim was to recreate the extinct wolfhounds of Ireland. Classified by recent genetic research into the Sighthound United Kingdom Rural Clade, the breed is used by coursing hunters who have prized it for its ability to dispatch game caught by other, swifter sighthounds. In 1902, the Irish Wolfhound was declared the regimental mascot of the Irish Guards. In 391, there is a reference to large dogs by Quintus Aurelius Symmachus, a Roman Consul who received seven "canes Scotici" as a gift to be used for fighting lions and bears, and who wrote "all Rome viewed (them) with wonder". Scoti is a Latin name for the Gaels (ancient Irish). Dansey, the early 19th-century translator of the first complete version of Arrian's work in English, On Coursing, suggested the Irish and Scottish "greyhounds" were derived from the same ancestor, the vertragus, and had expanded with the Scoti from Ireland across the Western Isles and into what is today Scotland. The dog-type is imagined by some to be very old. Wolfhounds were used as hunting dogs by the Gaels, who called them Cú Faoil (Irish: Cú Faoil [ˌkuː ˈfˠiːlʲ], composed of the elements "hound" and "wolf", i.e. "wolfhound"). Dogs are mentioned as cú in Irish laws and literature dating from the sixth century or, in the case of the Sagas, from the old Irish period, AD 600–900. The word cú was often used as an epithet for warriors as well as kings, denoting that they were worthy of the respect and loyalty of a hound. Cú Chulainn, a mythical warrior whose name means "hound of Culann", is supposed to have gained this name as a child when he slew the ferocious guard dog of Culann. As recompense he offered himself as a replacement.
In discussing the systematic evidence of historic dog sizes in Ireland, the Irish zooarchaeologist Finbar McCormick stressed that no dogs of Irish Wolfhound size are known from sites of the period from the Iron Age (from 1000 BC) through the early Christian period (to 1200 AD). On the basis of the historic dog bones available, dogs of current Irish Wolfhound size seem to be a relatively modern development: "it must be concluded that the dog of Cú Chulainn was no larger than an Alsatian and not the calf-sized beast of the popular imagination". Hunting dogs were coveted and were frequently given as gifts to important personages and foreign nobles. King John of England, in about 1210, presented an Irish hound named Gelert to Llywelyn, the Prince of Wales. The poet The Hon William Robert Spencer immortalized this hound in a poem. In his Historie of Ireland, written in 1571, Edmund Campion gives a description of the hounds used for hunting wolves in the Dublin and Wicklow mountains. He says: "They (the Irish) are not without wolves and greyhounds to hunt them, bigger of bone and limb than a colt". Due to their popularity overseas, many were exported to European royal houses, leaving numbers in Ireland depleted. This led to a declaration by Oliver Cromwell being published in Kilkenny on 27 April 1652 to ensure that sufficient numbers remained to control the wolf population. References to the Irish Wolfhound in the 18th century tell of its great size, strength and greyhound shape as well as its scarcity. Writing in 1790, Thomas Bewick described it as the largest and most beautiful of the dog kind; about 36 inches high, generally of a white or cinnamon colour, somewhat like the Greyhound but more robust. He said that their aspect was mild, disposition peaceful, and strength so great that in combat the Mastiff or Bulldog was far from being an equal to them. The last wolf in Ireland was killed in County Carlow in 1786.
It is thought to have been killed at Myshall, on the slopes of Mount Leinster, by a pack of wolfdogs kept by a Mr Watson of Ballydarton. The wolfhounds that remained in the hands of a few families, who were mainly descendants of the old Irish chieftains, were now symbols of status rather than used as hunters, and these were said to be the last of their race. Thomas Pennant (1726–1798) reported that he could find no more than three wolfdogs when he visited Ireland. At the 1836 meeting of the Geological Society of Dublin, John Scouler presented a paper titled "Notices of Animals which have disappeared from Ireland", including mention of the wolfdog. British Army officer Capt. George Augustus Graham (1833–1909), of Rednock House, Dursley, Gloucestershire, was responsible for creating the modern Irish wolfhound breed. He stated that he could not find the breed "in its original integrity" to work with: That we are in possession of the breed in its original integrity is not pretended; at the same time it is confidently believed that there are strains now existing that tracing back, more or less clearly, to the original breed; and it appears to be tolerably certain that our Deerhound is descended from that noble animal, and gives us a fair idea of what he was, though undoubtedly considerably his inferior in size and power. Based on the writings of others, Graham had formed the opinion that a dog resembling the original Irish wolfhound could be recreated through using the biggest and best examples of the Scottish Deerhound and the Great Dane, two breeds which he believed had been derived earlier from the wolfhound. For an outcross, he used the Duchess of Newcastle's Borzoi, who had earlier proved his wolf-hunting ability in his native Russia, and a "Tibetan wolf-dog", which was probably a Tibetan Kyi Apso. In 1885, Captain Graham founded the Irish Wolfhound Club, and the Breed Standard of Points to establish and agree the ideal to which breeders should aspire.
In 1902, the Irish Wolfhound was declared the regimental mascot of the Irish Guards. The Irish Wolfhound is a national symbol of Ireland and is sometimes considered the national dog of Ireland. It has also been adopted as a symbol by both rugby codes. The national rugby league team is nicknamed the Wolfhounds, and the Irish Rugby Football Union, which governs rugby union, changed the name of the country's A (second-level) national team in that code to the Ireland Wolfhounds in 2010. One of the symbols that the tax authorities in Ireland have on their revenue stamps has been the Irish wolfhound. In the video game The Elder Scrolls V: Skyrim, the Irish Wolfhound is the breed of dog for all dogs in the base game. Genomic analysis indicates that although there has been some DNA sharing between the Irish Wolfhound and the Deerhound, Whippet, and Greyhound, there has been significant sharing of DNA between the Irish Wolfhound and the Great Dane. One writer has stated that for the Irish Wolfhound, "the Great Dane appearance is strongly marked too prominently before the 20th Century". George Augustus Graham created the modern Irish wolfhound breed by retaining the appearance of the original form, but not its genetic ancestry. The Irish Wolfhound is characterised by its large size. According to the FCI standard, the expected range of heights at the withers is 81–86 centimetres (32–34 inches); minimum heights and weights are 79 cm (31 in)/54.5 kg (120 lb) and 71 cm (28 in)/40.5 kg (89 lb) for dogs and bitches respectively. It is more massively built than the Scottish Deerhound, but less so than the Great Dane. The coat is hard and rough on the head, body and legs, with the beard and the hair over the eyes particularly wiry. It may be black, brindle, fawn, grey, red, pure white, or any colour seen in the Deerhound. The Irish Wolfhound is a sighthound, and hunts by visual perception alone. The neck is muscular and fairly long, and the head is carried high.
It should appear to be longer than it is tall, and to be capable of catching and killing a wolf.

Irish Wolfhounds have a varied range of personalities and are most often noted for their personal quirks and individualism. An Irish Wolfhound is rarely mindless, however, and, despite its large size, is rarely found to be destructive or boisterous in the house. This is because the breed is generally introverted, intelligent, and reserved in character. An easygoing animal, the Irish Wolfhound is quiet by nature. Wolfhounds often form a strong bond with their family and can become quite destructive or morose if left alone for long periods of time.

The Irish Wolfhound makes for an effective and imposing guardian. The breed becomes attached both to its owners and to the other dogs it is raised with, and is therefore not the most adaptable of breeds. Bred for independence, an Irish Wolfhound is not necessarily keen on defending spaces. A wolfhound is most easily described by its historical motto, "gentle when stroked, fierce when provoked".

They should not be territorially aggressive to other domestic dogs, but they are born with specialized skills and it is common for hounds at play to course another dog. This is a specific hunting behavior, not a fighting or territorial domination behavior. Most Wolfhounds are very gentle with children. The Irish Wolfhound is relatively easy to train and responds well to firm but gentle, consistent leadership. However, these dogs were historically required to work at great distances from their masters and to think independently when hunting rather than wait for detailed commands, and this can still be seen in the breed.

Irish Wolfhounds are often favored for their loyalty, affection, patience, and devotion. Although at some points in history they have been used as watchdogs, unlike some breeds the Irish Wolfhound is usually unreliable in this role, as they are often friendly toward strangers, although their size can be a natural deterrent.
However, when protection is required this dog is never found wanting. When they or their family are in any perceived danger they display a fearless nature. Author and Irish Wolfhound breeder Linda Glover believes the dogs' close affinity with humans makes them acutely aware of and sensitive to ill will or malicious intentions, leading to their excelling as a guardian rather than a guard dog.

Like many large dog breeds, Irish Wolfhounds have a relatively short lifespan. Published lifespan estimates vary between 6 and 10 years, with 7 years being the average. Dilated cardiomyopathy and bone cancer are the leading causes of death, and, like all deep-chested dogs, the breed is prone to gastric torsion (bloat); it is also affected by hereditary intrahepatic portosystemic shunt.

In a privately funded study conducted under the auspices of the Irish Wolfhound Club of America and based on an owner survey, Irish Wolfhounds in the United States from 1966 to 1986 lived to a mean age of 6.47 years and died most frequently of bone cancer. A more recent study by the UK Kennel Club puts the average age at death at 7 years.

Studies have shown that neutering is associated with a higher risk of bone cancer in various breeds, with one study suggesting that castration should be avoided.
2001-12-08T11:55:50Z
2023-12-31T00:02:18Z
https://en.wikipedia.org/wiki/Irish_Wolfhound
Italian Greyhound
The Italian Greyhound (Italian: Piccolo levriero Italiano) is an Italian breed of small sighthound. It may also be called the Italian Sighthound.

Small dogs of sighthound type have long been popular with nobility and royalty. Among those believed to have kept them are Frederick II, Duke of Swabia; members of the D'Este, Medici and Visconti families; the French kings Louis XI, Charles VIII, Charles IX, Louis XIII and Louis XIV; Frederick the Great of Prussia; Anne of Denmark; Catherine the Great and Queen Victoria. Dogs of this type have often been represented in sculpture – including a second-century Roman statue now in the Vatican Museums – and paintings, notably by Giotto, Sassetta and Tiepolo.

Dogs of this kind were taken in the first half of the nineteenth century to the United Kingdom, where they were known as Italian Greyhounds. The first volume of The Kennel Club Calendar and Stud Book, published in 1874, lists forty of them. The first breed association was the Italian Greyhound Club, founded in Britain in 1900. Registrations by the American Kennel Club began in 1886.

The history of the modern Piccolo Levriero goes back to the last years of the nineteenth century. A total of six of the dogs were shown in 1901 in Milan and Novara, two in Turin in 1902, and one in Udine in 1903. Numbers began to increase only after the First World War, partly as a result of the work of two individual breeders, Emilio Cavallini and Giulia Ajò Montecuccoli degli Erri. In this post-war period the Piccolo Levriero was bred principally in Italy, France and Germany, and some Italian breeders imported dogs from outside the country. Of the forty-five dogs registered in 1926–1927 by the Kennel Club Italiano (as it was then known), twenty-eight were born in Italy and seventeen were imported.
The events of the Second World War brought the Piccolo Levriero close to extinction, and numbers began to recover only in the 1950s, particularly after 1951, when Maria Luisa Incontri Lotteringhi della Stufa brought the influential bitch Komtesse von Gastuna from Austria. The breed was definitively accepted by the Fédération Cynologique Internationale in October 1956, and in November of that year a breed society, the Circolo del Levriero Italiano, was formed under the auspices of the Ente Nazionale della Cinofilia Italiana; it was later renamed the Circolo del Piccolo Levriero Italiano.

In the nine years from 2011 to 2019, the Ente Nazionale della Cinofilia Italiana recorded a total of 2557 new registrations of the Piccolo Levriero, with a minimum of 213 and a maximum of 333 per year.

The Italian Greyhound is the smallest of the sighthounds. It weighs no more than 5 kg and stands 32 to 38 cm at the withers. It is deep in the chest, with a tucked-up abdomen, long slender legs and a long neck. The head is small; it is elongated and narrow. The gait should be high-stepping and well-sprung, with good forward extension in the trot, and a fast gallop. The coat may be solid black, grey or isabelline; white markings are accepted on the chest and feet only.

Life expectancy is about 14 years. In the United States, the Orthopedic Foundation for Animals has found the Italian Greyhound to be the least affected by hip dysplasia of 157 breeds studied, with an incidence of 0.

The original function of the Piccolo Levriero was to hunt hare and rabbit; it is capable of bursts of speed of up to 60 km/h (37 mph). Although assigned to the sighthound or hare-coursing groups by the Fédération Cynologique Internationale and the Ente Nazionale della Cinofilia Italiana, the Italian Sighthound is – as it was in the past – kept mostly as a companion dog. It is classified as a toy breed by the American Kennel Club and the Kennel Club of the United Kingdom.
2001-12-08T11:56:37Z
2023-12-28T17:55:54Z
https://en.wikipedia.org/wiki/Italian_Greyhound
Into the Woods
Into the Woods is a 1987 musical with music and lyrics by Stephen Sondheim and book by James Lapine. The musical intertwines the plots of several Brothers Grimm fairy tales, exploring the consequences of the characters' wishes and quests. The main characters are taken from "Little Red Riding Hood" (spelled "Ridinghood" in the published vocal score), "Jack and the Beanstalk", "Rapunzel", "Cinderella", and several others. The musical is tied together by a story involving a childless baker and his wife and their quest to begin a family (the original beginning of the Grimm Brothers' "Rapunzel"), their interaction with a witch who has placed a curse on them, and their interaction with other storybook characters during their journey. The second collaboration between Sondheim and Lapine after Sunday in the Park with George (1984), Into the Woods debuted in San Diego at the Old Globe Theatre in 1986 and premiered on Broadway on November 5, 1987, where it won three major Tony Awards (Best Score, Best Book, and Best Actress in a Musical for Joanna Gleason), in a year dominated by The Phantom of the Opera. The musical has since been produced many times, with a 1988 U.S. national tour, a 1990 West End production, a 1997 10th-anniversary concert, a 2002 Broadway revival, a 2010 London revival, and a 2012 outdoor Shakespeare in the Park production in New York City. A Disney film adaptation, directed by Rob Marshall, was released in 2014. The film grossed over $213 million worldwide, and received three nominations at both the Academy Awards and the Golden Globe Awards. A second Broadway revival began performances on June 28, 2022, at the St. James Theatre, and opened on July 10. The production closed on January 8, 2023, and began touring the U.S. on February 18 of the same year. 
The narrator introduces four groups of characters: Cinderella, who wishes to attend the king's festival; Jack, who wishes his cow, Milky White, would give milk; a baker and his wife, who wish to have a child; and Little Red Ridinghood, who wishes for bread that she can bring to her grandmother. The baker's neighbor, an ugly and aging witch, reveals the couple is infertile because she cursed his father for stealing her vegetables, including magic beans, which prompted the witch's own mother to punish her with the curse of age and ugliness. The witch took the baker's father's child, Rapunzel. She explains the curse will be lifted if she is brought four ingredients—"the cow as white as milk, the cape as red as blood, the hair as yellow as corn, and the slipper as pure as gold"—within three days. If they fail to do so, they will die in agony, as the baker's parents did after Rapunzel was taken as a baby. All begin the journey into the woods: Jack to sell his beloved cow; Cinderella to her mother's grave; Little Red to her grandmother's house; and the baker, refusing his wife's help, to find the ingredients ("Prologue").

Cinderella receives a gown and golden slippers from her mother's spirit ("Cinderella at the Grave"). A mysterious man mocks Jack for valuing his cow more than a "sack of beans". Little Red meets a hungry wolf, who persuades her, with ulterior motives, to take a longer path and admire the beauty of the woods ("Hello, Little Girl"). The baker, followed by his wife, meets Jack. They convince him that the beans in the baker's father's jacket are magic and trade them for the cow; Jack bids Milky White a tearful farewell ("I Guess This Is Goodbye"). The baker has qualms about their deceit, but his wife reassures him ("Maybe They're Magic"). The witch has raised Rapunzel in a tall tower accessible only by climbing Rapunzel's long, golden hair ("Our Little World"); a prince spies Rapunzel.
The baker, in pursuit of Little Red's cape ("Maybe They're Magic" Reprise), slays the wolf and rescues Little Red and her grandmother. Little Red rewards him with her cape, and reflects on her experiences ("I Know Things Now"). Jack's mother tosses aside his beans, which grow into an enormous stalk. Cinderella flees the festival, pursued by another prince, and the baker's wife hides her; asked about the ball, Cinderella is unimpressed ("A Very Nice Prince"). Spotting Cinderella's gold slippers, the baker's wife chases her and loses Milky White. The characters recite morals as the day ends ("First Midnight"). Jack describes his adventure climbing the beanstalk ("Giants in the Sky"). He gives the baker gold he stole from the giants to buy back his cow, and returns up the beanstalk to find more; the mysterious man steals the money. Cinderella's prince and Rapunzel's prince, who are brothers, compare their unobtainable amours ("Agony"). The baker's wife overhears their talk of a girl with golden hair. She fools Rapunzel and takes a piece of her hair. The mysterious man returns Milky White to the baker. The baker's wife again fails to seize Cinderella's slippers. The baker admits they must work together ("It Takes Two"). Jack arrives with a hen that lays golden eggs, but Milky White keels over dead as midnight chimes ("Second Midnight"). The Witch discovers the prince's visits and demands Rapunzel stay sheltered from the world ("Stay with Me"). Rapunzel refuses, and the witch cuts off Rapunzel's hair and banishes her. The mysterious man gives the baker money for another cow. Jack meets Little Red, now sporting a wolfskin cape and knife. She goads him into returning to the giants' home to retrieve a golden harp. Torn between staying with her prince and escaping, Cinderella leaves him a slipper as a clue ("On the Steps of the Palace") and trades shoes with the baker's wife. The baker arrives with another cow; they now have all four items. 
A great crash sounds, and Jack's mother reports a dead giant in her backyard. Jack returns with the harp. The witch discovers the new cow is useless, and resurrects Milky White, who is fed the ingredients but fails to give milk. The witch explains that Rapunzel's hair will not work because she touched it, and the mysterious man offers corn silk instead; Milky White produces the potion. The witch reveals the mysterious man is the baker's father, and drinks the potion. The mysterious man falls dead, the curse is broken, and the witch regains her youth and beauty, ultimately granting the Baker and his wife permission to live. Cinderella's prince seeks the girl who fits the slipper; Cinderella's desperate stepsisters mutilate their feet ("Careful My Toe"). Cinderella succeeds and becomes his bride. Rapunzel bears twins and is found by her prince. The witch finds her, and attempts to claim her back, but the witch's powers have been lost in exchange for her youth and beauty. At Cinderella's wedding, birds blind her stepsisters, and the baker's wife, now very pregnant, thanks Cinderella for her help ("So Happy" Prelude). Congratulating themselves on living "happily ever after", the characters fail to notice another beanstalk growing. The narrator continues, "Once upon a time... later." Everyone still has wishes—the baker and his wife face new frustrations with their infant son, newly rich Jack misses the kingdom in the sky, Cinderella is bored with life in the palace ("So Happy")—but is relatively content. With a tremendous crash, a giant's foot destroys the witch's garden and damages the baker's home. The baker travels to the palace, but his warning is ignored by the prince's steward and by Jack's mother. Returning home, he finds Little Red on her way to her grandmother's; he and his wife escort her. Jack decides to slay the new giant and Cinderella investigates her mother's disturbed grave. 
Everyone returns to the woods, but notices that the weather is more ominous ("Into the Woods" Reprise). Rapunzel, driven mad, also flees to the woods. Her prince follows and meets his brother; they confess their lust for two new women, Snow White and Sleeping Beauty ("Agony" Reprise). The baker, his wife, and Little Red find Cinderella's family and the steward, who reveal that the giant set upon the castle. The witch brings news that the giant destroyed the village and the baker's house. The giantess—widow of the giant Jack killed—appears, seeking revenge. As a sacrifice, the group offers up the narrator, who is killed. Jack's mother defends her son, angering the giantess, and the steward silences Jack's mother, inadvertently killing her. As the giantess leaves in search of Jack, Rapunzel is trampled to death, leaving the distraught witch to mourn ("Witch's Lament"). The royal family flees despite the baker's pleas to stay and fight. The witch vows to find Jack and give him to the giantess, and the baker and his wife split up to find him first. Cinderella's prince seduces the baker's wife ("Any Moment"). The baker finds Cinderella and convinces her to join their group. The baker, Little Red, and Cinderella are awaiting the return of the baker's wife when the witch arrives holding Jack hostage, having found him weeping over the baker's wife's body. The baker turns against Jack, and the two, along with Cinderella and Little Red, blame each other for the situation before all four turn on the witch for cursing the baker's father in the first place ("Your Fault"). Chastising their inability to accept the consequences of their actions, the witch pelts them with her remaining beans, and is struck by a curse for losing the magic beans again, disappearing ("Last Midnight"). 
Grief-stricken, the baker flees, but is convinced by his father's spirit to face his responsibilities ("No More"). He returns and forms a plan to kill the giantess. Cinderella stays behind with the baker's child and confronts her prince over his infidelity. He explains his feelings of unfulfillment and that he was not raised to be sincere, and she asks him to leave. Little Red discovers the giantess has killed her grandmother, as the baker tells Jack that his mother is dead. Jack vows to kill the steward, but the baker dissuades him, while Cinderella comforts Little Red. The baker and Cinderella explain that choices have consequences and everyone is connected ("No One Is Alone"). The four together slay the giantess, and the other characters—including the royal family, who have starved to death, and the princes and their new paramours—return to share one last set of morals. The survivors band together to hail the quartet as their heroes, and the spirit of the baker's wife comforts her mourning husband, encouraging him to tell their child their story. The baker begins to tell his son the tale, while the witch appears and warns: "Careful the things you say, children will listen" ("Finale"). Into the Woods premiered at the Old Globe Theatre in San Diego, California, on December 4, 1986, and ran for 50 performances under the direction of James Lapine. Most of the performers from that production went on to appear in the Broadway cast; the exceptions were John Cunningham as the Narrator/Wolf/Steward, George Coe as the Mysterious Man/Cinderella's Father, Kenneth Marshall as Cinderella's Prince, LuAnne Ponce as Little Red, and Ellen Foley as the Witch. Kay McClelland, who played Rapunzel and Florinda, went with the cast to Broadway but only played Florinda. The show evolved during the run, the most notable change being the addition of the song "No One Is Alone" midway through; because of this, the finale was also altered. 
It was originally "Midnight/Ever After (reprise)/It Takes Two (reprise)/Into the Woods (reprise 2)" but evolved into its present form. Another notable change was that, originally, the baker and Cinderella became a couple during the finale and kissed before singing the reprise of "It Takes Two". Into the Woods opened on Broadway at the Martin Beck Theatre on November 5, 1987, and closed on September 3, 1989, after 765 performances. It starred Bernadette Peters as the Witch, Joanna Gleason as the Baker's Wife, Chip Zien as the Baker, Robert Westenberg as the Wolf/Cinderella's Prince, Tom Aldredge as the Narrator/Mysterious Man, Kim Crosby as Cinderella, Danielle Ferland as Little Red Ridinghood, Ben Wright as Jack, Chuck Wagner as Rapunzel's Prince, Barbara Bryne as Jack's Mother, Pamela Winslow as Rapunzel, Merle Louise as Cinderella's Mother/Granny/Giant's Wife, Edmund Lyndeck as Cinderella's Father, Joy Franz as Cinderella's Stepmother, Philip Hoffman as the Steward, Lauren Mitchell as Lucinda, Kay McClelland as Florinda, Jean Kelly as Snow White, and Maureen Davis as Sleeping Beauty. It was directed by Lapine, with musical staging by Lar Lubovitch, settings by Tony Straiges, lighting by Richard Nelson, costumes by Ann Hould-Ward (based on original concepts by Patricia Zipprodt and Ann Hould-Ward), and makeup by Jeff Raum. The original production won the 1988 New York Drama Critics' Circle Award and the Drama Desk Award for Best Musical, and the original cast recording won a Grammy Award. The show was nominated for ten Tony Awards, and won three: Best Score (Sondheim), Best Book (Lapine) and Best Actress in a Musical (Gleason). Peters left the show after almost five months due to a prior commitment to film the movie Slaves of New York. 
The Witch was then played by Betsy Joslyn (from March 30, 1988); Phylicia Rashad (from April 14, 1988); Betsy Joslyn (from July 5, 1988); Nancy Dussault (from December 13, 1988); and Ellen Foley (from August 1, 1989, until the closing). Understudies for the part included Joslyn, Marin Mazzie, Lauren Vélez, Suzzanne Douglas, and Joy Franz. Other cast replacements included Dick Cavett as the Narrator (as of July 19, 1988, a temporary engagement after which Aldredge returned), Edmund Lyndeck as the Mysterious Man, Patricia Ben Peterson as Cinderella, LuAnne Ponce returning as Little Red, Jeff Blumenkrantz as Jack, Marin Mazzie as Rapunzel (as of March 7, 1989), Dean Butler and Don Goodspeed as Rapunzel's Prince, Susan Gordon Clark as Florinda, Teresa Burrell as Lucinda, Adam Grupper as the Steward, Cindy Robinson and Heather Shulman as Snow White, and Kay McClelland, Lauren Mitchell, Cynthia Sikes, and Mary Gordon Murray as the Baker's Wife. From May 23 to 25, 1989, the full original cast (with the exception of Cindy Robinson as Snow White instead of Jean Kelly) reunited for three performances to tape the show in its entirety for the Season 10 premiere episode of PBS's American Playhouse, which first aired on March 15, 1991. The show was filmed professionally with seven cameras on the set of the Martin Beck Theatre in front of an audience, with certain elements, such as the lighting and minor costume details, slightly changed to better fit the screen. Pick-up shots were also filmed without an audience. This video has since been released on VHS and DVD and, on occasion, remastered and rereleased. Tenth Anniversary benefit performances were held on November 9, 1997, at the Broadway Theatre (New York), with most of the original cast. 
Original cast understudies Chuck Wagner and Jeff Blumenkrantz played the Wolf/Cinderella's Prince and the Steward in place of Robert Westenberg and Philip Hoffman, while Jonathan Dokuchitz (who joined the Broadway production as an understudy in 1989) played Rapunzel's Prince in place of Wagner. This concert featured the duet "Our Little World", written for the first London production of the show. On November 9, 2014, most of the original cast reunited for two reunion concerts and a discussion in Costa Mesa, California. Mo Rocca hosted the reunion and interviewed Sondheim, Lapine, and each cast member. Appearing were Bernadette Peters, Joanna Gleason, Chip Zien, Danielle Ferland, Ben Wright, and husband and wife Robert Westenberg and Kim Crosby. The same group presented this discussion/concert on June 21, 2015, at the Brooklyn Academy of Music, New York City. A U.S. tour started performances on November 22, 1988. The cast included Cleo Laine as the Witch, Rex Robbins as the Narrator and Mysterious Man, Ray Gill and Mary Gordon Murray as the Baker and his Wife, Kathleen Rowe McAllen as Cinderella, Chuck Wagner as the Wolf/Cinderella's Prince, Douglas Sills as Rapunzel's Prince, Robert Duncan McNeill and Charlotte Rae as Jack and his Mother, Marcus Olson as the Steward, and Susan Gordon Clark reprising her role as Florinda from the Broadway production. The set was almost completely reconstructed, and the script was revised, altering certain story elements. Cast replacements included Betsy Joslyn as the Witch, Peter Walker as the Narrator/Mysterious Man, James Weatherstone as the Wolf/Cinderella's Prince, Jonathan Hadley as Rapunzel's Prince, Marcus Olson as the Baker, later replaced by Adam Grupper (who understudied the role on Broadway), Judy McLane as the Baker's Wife, Nora Mae Lyng as Jack's Mother, later replaced by Frances Ford, Stuart Zagnit as the Steward, Jill Geddes as Cinderella, later replaced by Patricia Ben Peterson, and Kevin R. 
Wright as Jack. The tour played cities around the country, such as Fort Lauderdale, Florida; Los Angeles; and Atlanta. The tour ran at the John F. Kennedy Center for the Performing Arts from June to July 16, 1989, with The Washington Post's reviewer writing: "his lovely score—poised between melody and dissonance—is the perfect measure of our tenuous condition. The songs invariably follow the characters' thinking patterns, as they weigh their options and digest their experience. Needless to say, that doesn't make for traditional show-stoppers. But it does make for vivacity of another kind. And Sondheim's lyrics...are brilliant.... I think you'll find these cast members alert and engaging." The original West End production opened on September 25, 1990, at the Phoenix Theatre and closed on February 23, 1991, after 197 performances. It was directed by Richard Jones and produced by David Mirvish, with set design by Richard Hudson, choreography by Anthony Van Laast, costumes by Sue Blane, and orchestrations by Jonathan Tunick. The cast featured Julia McKenzie as the Witch, Ian Bartholomew as the Baker, Imelda Staunton as the Baker's Wife, and Clive Carter as the Wolf/Cinderella's Prince. The show received seven Olivier Award nominations in 1991, winning Best Actress in a Musical (Staunton) and Best Director of a Musical (Jones). The song "Our Little World", a duet for the Witch and Rapunzel, was added, giving further insight into the Witch's care for her self-proclaimed daughter and Rapunzel's desire to see the world outside her tower. The show's overall feel was darker than that of the original Broadway production. Critic Michael Billington wrote: "But the evening's triumph belongs also to director Richard Jones, set designer Richard Hudson and costume designer Sue Blane who evoke exactly the right mood of haunted theatricality. Old-fashioned footlights give the faces a sinister glow. 
The woods themselves are a semi-circular, black-and-silver screen punctuated with nine doors and a crazy clock: they achieve exactly the 'agreeable terror' of Gustave Doré's children's illustrations. And the effects are terrific: doors open to reveal the rotating magnified eyeball or the admonitory finger of the predatory giant." A new, intimate production of the show, billed as the first London revival, opened at the Donmar Warehouse on 16 November 1998 and closed on 13 February 1999. It was directed by John Crowley and designed by his brother, Bob Crowley. The cast included Clare Burt as the Witch, Nick Holder as the Baker, Sophie Thompson as the Baker's Wife, Jenna Russell as Cinderella, Sheridan Smith as Little Red, Damian Lewis as the Wolf/Cinderella's Prince, and Frank Middlemiss as the Narrator. Russell later appeared as the Baker's Wife in the 2010 Regent's Park production. Thompson won the 1999 Olivier Award for Best Actress in a Musical, while the production was nominated for Outstanding Musical Production. A revival ran at the Ahmanson Theatre in Los Angeles from February 1 to March 24, 2002, with the same principal cast, director, and choreographer as the subsequent Broadway run. The 2002 Broadway revival, directed by Lapine and choreographed by John Carrafa, began previews on April 13, 2002, and opened April 30 at the Broadhurst Theatre, closing on December 29 after 18 previews and 279 regular performances. It starred Vanessa Williams as the Witch, John McMartin as the Narrator/Mysterious Man, Stephen DeRosa as the Baker, Kerry O'Malley as the Baker's Wife, Gregg Edelman as the Wolf/Cinderella's Prince, Christopher Sieber as the Wolf/Rapunzel's Prince, Molly Ephraim as Little Red, Adam Wylie as Jack, and Laura Benanti as Cinderella. Judi Dench provided the giantess's pre-recorded voice. 
Lapine revised the script slightly for this production, restoring a cameo appearance by the Three Little Pigs from the earlier San Diego production. Other changes, apart from numerous small dialogue revisions, included the addition of the song "Our Little World", the addition of a second Wolf who competes with the first for Little Red's attention (portrayed by the same actor as Rapunzel's Prince), the portrayal of Jack's cow by a live performer (Chad Kimball) in an intricate costume, and new lyrics for "Last Midnight", now a menacing lullaby sung by the Witch to the Baker's baby. This production featured scenic design by Douglas W. Schmidt, costume design by Susan Hilferty, lighting design by Brian MacDevitt, sound design by Dan Moses Schreier, and projection design by Elaine J. McCarthy. The revival won Tony Awards for Best Revival of a Musical and Best Lighting Design. The wardrobe from this Broadway revival is on display at Costume World in South Florida. A revival at the Royal Opera House's Linbury Studio in Covent Garden had a limited run from June 14 to 30, 2007, followed by a short stint at The Lowry in Salford Quays, Manchester, on 4–7 July. The production mixed opera singers, musical theatre actors, and film and television actors, including Anne Reid as Jack's Mother and Gary Waldhorn as the Narrator. Directed by Will Tuckett, it received mixed reviews, although there were clear standout performances. The production completely sold out three weeks before opening. Because this was billed as an "opera" production, the show and its performers were overlooked in the "musical" nominations for the 2008 Laurence Olivier Awards. It featured Suzie Toase (Little Red), Peter Caulfield (Jack), Beverley Klein (Witch), Anna Francolini (Baker's Wife), Clive Rowe (Baker), Nicholas Garrett (Wolf/Cinderella's Prince), and Lara Pulver (Lucinda). This was the second Sondheim musical to be staged by the Opera House, following 2003's Sweeney Todd. 
The Olivier Award-winning Regent's Park Open Air Theatre production, directed by Timothy Sheader and choreographed by Liam Steel, ran for a six-week limited season from 6 August to 11 September 2010. The cast included Hannah Waddingham as the Witch, Mark Hadfield as the Baker, Jenna Russell as the Baker's Wife, Helen Dallimore as Cinderella, Michael Xavier as the Wolf/Cinderella's Prince, and Judi Dench as the recorded voice of the Giant. Gareth Valentine was the Musical Director. The musical was performed outdoors in a wooded area. Whilst the book remained mostly unchanged, the subtext of the plot was dramatically altered by casting the role of the Narrator as a young schoolboy lost in the woods following a family argument – a device used to further illustrate the musical's themes of parenting and adolescence. The production opened to wide critical acclaim, with much of the press commenting on the effectiveness of the open-air setting. The Telegraph reviewer, for example, wrote: "It is an inspired idea to stage this show in the magical, sylvan surroundings of Regent's Park, and designer Soutra Gilmour has come up with a marvellously rickety, adventure playground of a set, all ladders, stairs and elevated walkways, with Rapunzel discovered high up in a tree." The New York Times reviewer commented: "The natural environment makes for something genuinely haunting and mysterious as night falls on the audience". Sondheim attended twice and was reportedly extremely pleased with the production. The production also won the Laurence Olivier Award for Best Musical Revival, and Xavier was nominated for the Laurence Olivier Award for Best Performance in a Supporting Role in a Musical. The production was recorded in its entirety and is available to download from Digital Theatre. 
The Regent's Park Open Air Theatre production transferred to the Public Theater's 2012 summer series of free Shakespeare in the Park performances at the Delacorte Theater in Central Park, New York, with an American cast and new designers. Sheader again directed, while Steel served as co-director and choreographer. Performances were originally to run from July 24 (delayed from July 23 due to weather) to August 25, 2012, but the show was extended through September 1. The cast included Amy Adams as the Baker's Wife, Donna Murphy as the Witch, Denis O'Hare as the Baker, Chip Zien (the Baker in the 1987 Broadway cast) as the Mysterious Man/Cinderella's Father, Ivan Hernandez as the Wolf/Cinderella's Prince, Jessie Mueller as Cinderella, Jack Broderick as the young Narrator, Gideon Glick as Jack, Cooper Grodin as Rapunzel's Prince, Sarah Stiles as Little Red, Josh Lamon as the Steward, and Glenn Close as the Voice of the Giant. The set was a "collaboration between original Open Air Theatre designer Soutra Gilmour and...John Lee Beatty, [and] rises over 50 feet in the air, with a series of tree-covered catwalks and pathways." The production was dedicated to Nora Ephron, who had died earlier in 2012. In February and May 2012, reports of a possible Broadway transfer surfaced, with the production's principal actors in negotiations to reprise their roles. In January 2013, it was announced that the production would not transfer to Broadway due to scheduling conflicts. For its annual fully staged musical event, the Hollywood Bowl produced a limited run of Into the Woods from July 26 to 28, 2019, directed and choreographed by Robert Longbottom. 
The cast included Skylar Astin as the Baker, Sutton Foster as the Baker's Wife, Patina Miller as the Witch, Sierra Boggess as Cinderella, Cheyenne Jackson as the Wolf/Cinderella's Prince, Chris Carmack as Rapunzel's Prince, Gaten Matarazzo as Jack, Anthony Crivello as the Mysterious Man, Edward Hibbert as the Narrator, Shanice Williams as Little Red, Hailey Kilgore as Rapunzel, Rebecca Spencer as Jack's Mother, original Broadway cast member Gregory North as Cinderella's Father, and Whoopi Goldberg as the voice of the Giant. The production featured Ann Hould-Ward's costumes from the original Broadway production. In November 2020, it was announced that New York City Center would stage Into the Woods as part of its Encores! series. In August 2021, it was announced that Christian Borle, Sara Bareilles, Ashley Park, and Heather Headley had joined the cast as, respectively, the Baker, his Wife, Cinderella, and the Witch. Park had initially been scheduled to star in the Encores! production of Thoroughly Modern Millie, which was canceled due to the COVID-19 pandemic. Headley had played the Witch at The Muny in 2015. In December 2021, High School Musical: The Musical: The Series star Julia Lester joined the cast as Little Red Ridinghood, alongside Shereen Pimentel as Rapunzel, Jordan Donica as her Prince, and Cole Thompson as Jack. In March 2022, it was revealed that Denée Benton had replaced Park as Cinderella, with other cast members including Gavin Creel as the Wolf/Cinderella's Prince, Annie Golden as Cinderella's Mother/Granny/Giant's Wife, Ann Harada as Jack's Mother, David Patrick Kelly as the Narrator/Mysterious Man, Tiffany Denise Hobbs as Lucinda (later replaced by Ta'Nika Gibson), Brooke Ishibashi as Florinda, Kennedy Kanagawa as Milky White, Lauren Mitchell (who played Lucinda in the 1987 Broadway production) as Cinderella's Stepmother, and David Turner as the Steward. 
In April 2022, Neil Patrick Harris was announced as playing the Baker, replacing Borle due to a schedule conflict. Albert Guerzon also joined the cast as Cinderella's Father. Jason Forbach, Mary Kate Moore, and Cameron Johnson were the production's swings. After Jordan Donica tested positive for COVID-19, Forbach played Rapunzel's Prince for the first week of performances. The production ran from May 4 to 15, 2022, and was directed by Encores! artistic director Lear deBessonet. This was the final Encores! show with Rob Berman conducting the Encores! orchestra. Fewer than two weeks after the limited Encores! engagement closed, it was announced that the production would transfer to Broadway at the St. James Theatre. The Broadway production officially opened on July 10, 2022 (with previews having begun on June 28), to universally positive reviews. Most of the Encores! cast transferred, with the additions of Brian d'Arcy James as the Baker, Patina Miller as the Witch, Phillipa Soo as Cinderella, and Joshua Henry as Rapunzel's Prince. Other new cast members included Nancy Opel as Cinderella's Stepmother, Aymee Garcia as Jack's Mother, Alysia Velez as Rapunzel, and Paul Kreppel, Diane Phelan, Alex Joseph Grayson, Felicia Curry, Delphi Borich, and Lucia Spina as understudies. From July 24 to August 2, Cheyenne Jackson temporarily filled in for Gavin Creel as the Wolf and Cinderella's Prince, reprising his roles from the Hollywood Bowl production. On July 18, 2022, Sara Bareilles revealed on her Instagram Stories that a cast album of the production was being recorded. During July, it was announced that the production, originally scheduled for an eight-week run, had extended its run through October 16. On August 4, 2022, it was announced that the entire Broadway cast would remain with the show through September 4. On September 6, married couple Stephanie J. Block and Sebastian Arcelus replaced Bareilles and James as the Baker's Wife and the Baker. 
Other replacements included Krysta Rodriguez as Cinderella, Katy Geraghty replacing Julia Lester as Little Red, and Jim Stanek replacing David Turner as the Steward. Montego Glover also began sharing the role of the Witch with Miller, and Andy Karl played a limited run as the Wolf/Cinderella's Prince from September 6 to 15, filling in for Creel. On September 27, Ann Harada joined the cast, reprising her role as Jack's Mother from the Encores! production. That same month, it was announced that the production was given a final extension through January 8, 2023. On September 22, it was announced that James would return to the cast as the Baker starting October 25, and that Karl would also return, this time as Rapunzel's Prince, starting October 11. The cast album was released on September 30. On October 25, it was announced that Denée Benton would rejoin the cast on November 21, reprising her role as Cinderella from the Encores! production. She left the production on December 24. On November 17, it was announced that Joaquina Kalukango would play the Witch from December 16 until the show's closing date of January 8. Karl's extended run ended December 2, when Henry returned. That same day, the cast recording was released on CD. On December 15, it was announced that understudy Diane Phelan would take over as Cinderella on December 26 for the last two weeks of the run. It was also announced that Arcelus would return to the production, replacing James as the Baker, starting January 3. The Broadway production closed on January 8, 2023. 
The final Broadway cast was Kalukango as the Witch, Arcelus and Block as the Baker and his Wife, Creel as the Wolf/Cinderella's Prince, Phelan as Cinderella, Cole Thompson as Jack, Geraghty as Little Red Ridinghood, Henry as Rapunzel's Prince, David Patrick Kelly as the Narrator/Mysterious Man, Harada as Jack's Mother, Opel as Cinderella's Stepmother, Velez as Rapunzel, Stanek as the Steward, Annie Golden as Cinderella's Mother/Granny/Giantess, Brooke Ishibashi as Florinda, Ta'Nika Gibson as Lucinda, Albert Guerzon as Cinderella's Father/Puppeteer, Kennedy Kanagawa as Milky White/Puppeteer, and Jason Forbach, Mary Kate Moore, Cameron Johnson, Kreppel, Grayson, Curry, Borich, Spina, and Sam Simahk as understudies. The production's cast recording won the Grammy Award for Best Musical Theater Album. The recording was released on vinyl on March 17, 2023. The production was nominated for six Tony Awards. In December 2021, it was announced that a new production of Into the Woods would run for four weeks at the Theatre Royal in Bath, starting on August 17. It was directed by Terry Gilliam and Leah Hausman, who had previously worked together on the staging of two Berlioz operas at the English National Opera: The Damnation of Faust in 2011 and Benvenuto Cellini in 2014. The show had first been booked for the Old Vic Theatre in 2020, but was delayed by the COVID-19 pandemic and then canceled altogether. The cast included Julian Bleach as the Mysterious Man, Nicola Hughes as the Witch, Rhashan Stone as the Baker, Alex Young as the Baker's Wife, Nathanael Campbell as the Wolf/Cinderella's Prince, Audrey Brisson as Cinderella, Barney Wilkinson as Jack, Gillian Bevan as Jack's Mother, Charlotte Jaconelli as Florinda, Maria Conneeley as Rapunzel, and Lauren Conroy, making her professional stage debut, as Little Red Ridinghood. Milky White was played in pantomime by the dancer Faith Prendergast. 
Stephen Higgins was the music director, with production design by Jon Bausor and costumes by Anthony McDonald. In contrast to the simultaneous Broadway revival, this production was highly visual, with elaborate sets and props; its conceit was that the characters are figures in a Victorian toy theatre that a young girl is playing with. The toy theatre filled the entire stage, with "giant" props (cans of beans, a watch, a vase, a doll) appearing throughout and used by the characters as elements of the setting (for example, Rapunzel's tower is a pile of bean cans). Slapstick was also emphasized, "done in the spirit of what Sondheim has written". Lapine and Sondheim supported this new vision, and Sondheim gave his approval for the cast before he died. Sondheim also discussed the production with the directors over Zoom; reportedly, he liked what he saw so much that he fell off his chair laughing. Terry Gilliam had met with Sondheim in the 1990s about a film adaptation of the show that Paramount was to produce, with Robin Williams and Emma Thompson as the Baker and the Baker's Wife, but Gilliam declined because he thought the script inferior to the original. The show opened to generally positive reviews, with critics praising this "hallucinogenic take", its "imaginative imagery" and "sheer spectacle", and acclaiming Leah Hausman's "particularly crisp" choreography, while some regretted the lack of an "emotional connection between the characters and the audience" and felt that "nothing quite develops its emotional power as much as it might". Still, all recognized the strength and vocal talent of the cast. Special compliments often went to the "outstanding work" of Faith Prendergast as Milky White, with The Guardian going as far as calling it "the most characterful presence" on the stage. The show transferred to a West End theatre in late 2022 or early 2023. On December 6, 2022, it was announced that the 2022 Broadway revival production would tour the U.S. 
in 2023, starting on February 18. It starred Montego Glover as the Witch, Sebastian Arcelus and Stephanie J. Block as the Baker and his Wife, Gavin Creel as the Wolf/Cinderella's Prince, Cole Thompson as Jack, Katy Geraghty as Little Red, David Patrick Kelly as the Narrator/Mysterious Man, Nancy Opel as Cinderella's Stepmother, Aymee Garcia as Jack's Mother (from Boston onward), Ta'Nika Gibson as Lucinda, Brooke Ishibashi as Florinda, Jim Stanek as the Steward, Alysia Velez as Rapunzel, and Kennedy Kanagawa as Milky White/Puppeteer, all reprising their Broadway roles. On December 15, it was announced that Diane Phelan would reprise her role as Cinderella on tour. On January 17, the rest of the cast was announced, including Broadway understudies Jason Forbach and Felicia Curry as Rapunzel's Prince and the Giant's Wife/Cinderella's Mother/Granny, respectively. Other cast members included Rayanne Gonzales as Jack's Mother (in Buffalo and Washington, D.C. only), Josh Breckenridge as Cinderella's Father/Puppeteer, and Paul Kreppel, Sam Simahk, Erica Durham, Ellie Fishman, Marya Grandy, Ximone Rose, and Eddie Lopez as understudies. On January 31, it was announced that the production was extending its Boston engagement by another week, and that Arcelus and Block would not perform for six days of the engagement. On February 25–26, Andy Karl reprised his Broadway role of Rapunzel's Prince during the opening weekend of the tour's engagement in Washington, D.C., while Forbach stepped into the role of the Baker for Arcelus, who was recovering from an injury sustained earlier in the week. Forbach filled in for Arcelus for over two weeks. On February 28, Forbach announced on his Instagram Stories that he would play the Baker during Arcelus's absence in Boston. Understudy Sam Simahk played Rapunzel's Prince in his place. 
On March 1, it was announced during "Wonderstudy Wednesday" on Instagram that understudy Ximone Rose would play the Baker's Wife during Block's absence in Boston. From March 28 to April 2, Cameron Johnson reprised his role as an understudy while Simahk played Rapunzel's Prince. Durham took over from Ishibashi as Florinda on July 5, and Sabrina Santana joined the cast as an understudy. Krysta Rodriguez and Albert Guerzon reprised their Broadway roles of Cinderella and Cinderella's Father/Puppeteer on July 13 and July 18, respectively, and played them until the tour closed on July 30. The tour played a ten-city engagement, visiting Shea's Performing Arts Center in Buffalo, New York; the Kennedy Center Opera House in Washington, D.C.; the Emerson Colonial Theatre in Boston; the Miller Theater in Philadelphia; the Blumenthal Performing Arts Center in Charlotte, North Carolina; the James M. Nederlander Theatre in Chicago; the Curran Theatre in San Francisco; the Ahmanson Theatre in Los Angeles; the Tennessee Performing Arts Center in Nashville, Tennessee; and the Dr. Phillips Center in Orlando, Florida. A production played in Sydney from 19 March to 5 June 1993 at the Drama Theatre, Sydney Opera House. It starred Judi Connelli as the Witch, Geraldine Turner as the Baker's Wife, Tony Sheldon as the Baker, Philip Quast as the Wolf/Cinderella's Prince, Pippa Grandison as Cinderella, Sharon Millerchip as Little Red Ridinghood, and D. J. Foster as Rapunzel's Prince. A Melbourne Theatre Company production ran from 17 January to 21 February 1998 at the Playhouse, Victorian Arts Centre. It starred Rhonda Burchmore as the Witch, John McTernan as the Baker, Gina Riley as the Baker's Wife, Lisa McCune as Cinderella, Robert Grubb as the Wolf/Cinderella's Prince, Peter Carroll as the Narrator/Mysterious Man, and Tamsin Carroll as Little Red Ridinghood. In 2000, there was a production starring Pat Harrington, Jr. 
as the Narrator, Brian d'Arcy James as the Baker, Leah Hocking as the Baker's Wife, Tracy Katz as Little Red, Liz McCartney as the Witch, and Patricia Ben Peterson as Cinderella at the Ordway Center for the Performing Arts. In 2009, a production was staged at the Wells Fargo Pavilion in Sacramento, California. It starred Yvette Cason as the Witch, Jeffry Denman as the Baker, Vicki Lewis as his Wife, Tracy Katz reprising her role as Little Red Ridinghood from the first national tour, Jason Forbach as the Wolf/Rapunzel's Prince, Gordon Goodman as Cinderella's Prince, Kim Huber as Cinderella, Matthew Wolpe as Jack, and Michael G. Hawkins as the Narrator/Mysterious Man. The first professional Spanish-language production, Dentro del Bosque, was produced by the University of Puerto Rico Repertory Theatre and premiered in San Juan at the Teatro de la Universidad (University Theatre) on March 14, 2013. The cast included Víctor Santiago as the Baker, Ana Isabelle as the Baker's Wife, and Lourdes Robles as the Witch. A 25th-anniversary co-production between Baltimore's Center Stage and the Westport Country Playhouse, directed by Mark Lamos, was notable for casting the original Little Red Ridinghood, Danielle Ferland, as the Baker's Wife. The cast included Erik Liberman as the Baker, Lauren Kennedy as the Witch, Jeffry Denman as the Narrator, Nik Walker as the Wolf/Cinderella's Prince, Dana Steingold as Little Red Ridinghood, Justin Scott Brown as Jack, Jenny Latimer as Cinderella, Cheryl Stern as Jack's Mother, Robert Lenzi as Rapunzel's Prince/Cinderella's Father, Alma Cuervo as Cinderella's Stepmother/Granny/Giant's Wife, Britney Coleman as Rapunzel/Cinderella's Mother, Nikka Lanzarone as Florinda, Eleni Delopoulos as Lucinda, and Jeremy Lawrence as the Mysterious Man. The production received 2011–2012 Connecticut Critics Circle Awards for Best Production, Best Ensemble, and Steingold's performance as Little Red Ridinghood. 
In 2014, a production premiered at the Théâtre du Châtelet in Paris, France, running from April 1 to 12. It starred Nicholas Garrett as the Baker, Francesca Jackson as Little Red Ridinghood, Kimy McLaren as Cinderella, Christine Buffle as the Baker's Wife, Beverley Klein as the Witch, Pascal Charbonneau and Rebecca de Pont Davies as Jack and his Mother, Damian Thantrey as the Wolf/Cinderella's Prince, David Curry as the Wolf/Rapunzel's Prince, Louise Alder as Rapunzel, and Fanny Ardant as the voice of the Giantess. The Roundabout Theatre production, directed by Noah Brody and Ben Steinfeld, began previews Off-Broadway at the Laura Pels Theatre on December 19, 2014, opened officially on January 22, 2015, and closed on April 12, 2015. Like the original Broadway production 28 years prior, this production had a try-out run at the Old Globe Theatre in San Diego, California, from July 12 to August 17, 2014, with opening night on July 17. This version was a minimalist reimagining by the Fiasco Theater Company, featuring only ten actors playing multiple parts and a single piano accompanist. A national tour of this production began on November 29, 2016. The DreamCatcher Theatre production opened in January 2015 and played a sold-out run at the Adrienne Arsht Center in Miami, Florida. Tituss Burgess starred as the Witch, the first male actor to do so. The cast also included Arielle Jacobs as the Baker's Wife, JJ Caruncho as the Baker, Justin John Moniz as the Wolf/Cinderella's Prince, Wayne LeGette as the Narrator/Mysterious Man, Annemarie Rosano as Cinderella, and Matthew Janisse as Rapunzel's Prince. The musical had a production at The Muny in Forest Park, St. Louis, Missouri, running from July 21 through 28, 2015. 
The cast included Heather Headley (Witch), Erin Dilly (Baker's Wife), Rob McClure (Baker), Ken Page (Narrator), Elena Shaddow (Cinderella), Andrew Samonsky (Wolf/Cinderella's Prince), Samantha Massell (Rapunzel), and Michael McCormick (Mysterious Man/Cinderella's Father). Hart House Theatre in Toronto, Ontario staged the musical from January 15 to 30, 2016, and again from February 9 to 11, 2023. A production ran at the West Yorkshire Playhouse in Leeds in collaboration with Opera North from 2 to 25 June 2016. The Israeli premiere of the musical, אל תוך היער (El Toch Ha-ya-ar), opened in Tel Aviv in August 2016 for a limited run produced by The Tramp Productions and Stuff Like That, starring Roi Dolev as the Witch, the second male actor to do so. In 2019, a production at the Patchogue Theatre starred Constantine Maroulis as the Wolf/Cinderella's Prince, Melissa Errico as the Baker's Wife, Ali Ewoldt as Cinderella, Alice Ripley as the Witch, Jim Stanek as the Baker, Alan Muraoka as the Narrator/Mysterious Man, and Darren Ritchie as Rapunzel's Prince. Also in 2019, Into the Woods was staged by the Barrington Stage Company in Pittsfield, Massachusetts. It starred Mykal Kilgore as the Witch, Mara Davi as the Baker's Wife, Jonathan Raviv as the Baker, Pepe Nufrio as Rapunzel's Prince, Sarah Dacey Charles as Cinderella's Stepmother/Granny/Cinderella's Mother, Dorcas Leung as Little Red Ridinghood, Amanda Robles as Cinderella, Thom Sesma as the Narrator/Mysterious Man, Kevin Toniazzo-Naughton as the Wolf/Cinderella's Prince, Clay Singer as Jack, Zoë Aarts as Lucinda, Megan Orticelli as Florinda, and Leslie Becker as the Giant's Wife/Jack's Mother. A 2022 production staged at Arkansas Repertory Theatre featured the pre-recorded voice of former Secretary of State and one-time Presidential nominee Hillary Clinton as the Giant. In 2023, a production was staged by Open Stage Theatre Company in Harrisburg, Pennsylvania. 
A production by Belvoir St Theatre in Sydney, Australia ran in 2023, from 23 March to 30 April. The original principal casts of notable stage productions of Into the Woods. The musical has been adapted into a child-friendly version for use by schools and young companies, with the second act completely removed, as well as almost half the material from the first. The show is shortened from the original two and a half hours to fit in a 50-minute range, and the music is transposed into keys that more easily fit young voices. It is licensed through Music Theatre International's Broadway Junior collection. The plot differs from the original in that the story ends on a "happy ending". In 2019, a similar adaptation, Into the Woods Sr., intended for performance by senior citizens in community centers and nursing homes, premiered. It is available under license. A theatrical film adaptation of the musical was produced by Walt Disney Pictures, directed by Rob Marshall, and starring Meryl Streep as the Witch, Emily Blunt as the Baker's Wife, James Corden as the Baker, Anna Kendrick as Cinderella, Chris Pine as Cinderella's Prince, Daniel Huttlestone as Jack, Lilla Crawford as Little Red Ridinghood, Tracey Ullman as Jack's Mother, Billy Magnussen as Rapunzel's Prince, Christine Baranski as Cinderella's Stepmother, MacKenzie Mauzy as Rapunzel, Tammy Blanchard as Florinda, and Johnny Depp as the Wolf. The film was released on December 25, 2014. It was a critical and commercial hit, grossing over $213 million worldwide. For her performance as the Witch, Streep was nominated for the Academy Award for Best Supporting Actress. The film also received Academy Award nominations for Best Production Design and Best Costume Design. In most productions of Into the Woods, including the original Broadway production, several parts are doubled. Cinderella's Prince and the Wolf, who both cannot control their appetites, are usually played by the same actor. 
The Narrator and the Mysterious Man, who both comment on the story while avoiding any personal involvement or responsibility, are similarly doubled. Granny and Cinderella's Mother, both matriarchal characters, are also typically played by the same person, who also gives voice to the nurturing but later murderous Giant's Wife. The show covers multiple themes: growing up, parents and children, accepting responsibility, morality, and finally, wish fulfillment and its consequences. Time Magazine's reviewers wrote that the play's "basic insight... is that, at heart, most fairy tales are about the loving yet embattled relationship between parents and children. Almost everything that goes wrong—which is to say, almost everything that can—arises from a failure of parental or filial duty, despite the best intentions." Stephen Holden wrote that the show's themes include parent-child relationships and the individual's responsibility to the community. The witch isn't just a scowling old hag, but a key symbol of moral ambivalence. Lapine said that the most unpleasant person (the Witch) would have the truest things to say and the "nicer" people would be less honest. In the Witch's words: "I'm not good; I'm not nice; I'm just right." Given the show's debut during the 1980s, the height of the U.S. AIDS crisis, the work has been interpreted as a parable about AIDS. In this interpretation, the Giant's Wife is a metaphor for HIV/AIDS, killing good and bad characters indiscriminately and forcing survivors to band together to stop the threat and move on, reflecting the devastation AIDS wrought on many communities. When asked about the connection, Sondheim acknowledged that initial audiences interpreted it as an AIDS metaphor, but said that the work was not intended to be specific. The score is also notable in Sondheim's output because of its intricate reworking and development of small musical motifs. 
In particular, the opening words, "I wish", are set to the interval of a rising major second and this small unit is both repeated and developed throughout the show, just as Lapine's book explores the consequences of self-interest and "wishing". The dialogue is characterized by the heavy use of syncopated speech. In many instances, the characters' lines are delivered with a fixed beat that follows natural speech rhythms, but is also purposely composed in eighth, sixteenth, and quarter note rhythms as part of a spoken song. Like many Sondheim/Lapine productions, the songs contain thought-process narrative, where characters converse or think aloud. Sondheim drew on parts of his troubled childhood when writing the show. In 1987, he told Time Magazine that the "father uncomfortable with babies [was] his father, and [the] mother who regrets having had children [was] his mother."
[ { "paragraph_id": 0, "text": "Into the Woods is a 1987 musical with music and lyrics by Stephen Sondheim and book by James Lapine.", "title": "" }, { "paragraph_id": 1, "text": "The musical intertwines the plots of several Brothers Grimm fairy tales, exploring the consequences of the characters' wishes and quests. The main characters are taken from \"Little Red Riding Hood\" (spelled \"Ridinghood\" in the published vocal score), \"Jack and the Beanstalk\", \"Rapunzel\", \"Cinderella\", and several others. The musical is tied together by a story involving a childless baker and his wife and their quest to begin a family (the original beginning of the Grimm Brothers' \"Rapunzel\"), their interaction with a witch who has placed a curse on them, and their interaction with other storybook characters during their journey.", "title": "" }, { "paragraph_id": 2, "text": "The second collaboration between Sondheim and Lapine after Sunday in the Park with George (1984), Into the Woods debuted in San Diego at the Old Globe Theatre in 1986 and premiered on Broadway on November 5, 1987, where it won three major Tony Awards (Best Score, Best Book, and Best Actress in a Musical for Joanna Gleason), in a year dominated by The Phantom of the Opera. The musical has since been produced many times, with a 1988 U.S. national tour, a 1990 West End production, a 1997 10th-anniversary concert, a 2002 Broadway revival, a 2010 London revival, and a 2012 outdoor Shakespeare in the Park production in New York City.", "title": "" }, { "paragraph_id": 3, "text": "A Disney film adaptation, directed by Rob Marshall, was released in 2014. The film grossed over $213 million worldwide, and received three nominations at both the Academy Awards and the Golden Globe Awards.", "title": "" }, { "paragraph_id": 4, "text": "A second Broadway revival began performances on June 28, 2022, at the St. James Theatre, and opened on July 10. The production closed on January 8, 2023, and began touring the U.S. 
on February 18 of the same year.", "title": "" }, { "paragraph_id": 5, "text": "The narrator introduces four groups of characters: Cinderella, who wishes to attend the king's festival; Jack, who wishes his cow, Milky White, would give milk; a baker and his wife, who wish to have a child; and Little Red Ridinghood, who wishes for bread that she can bring to her grandmother.", "title": "Synopsis" }, { "paragraph_id": 6, "text": "The baker's neighbor, an ugly and aging witch, reveals the couple is infertile because she cursed his father for stealing her vegetables, including magic beans, which prompted the witch's own mother to punish her with the curse of age and ugliness. The witch took the baker's father's child, Rapunzel. She explains the curse will be lifted if she is brought four ingredients—\"the cow as white as milk, the cape as red as blood, the hair as yellow as corn, and the slipper as pure as gold\"—within three days. If they fail to do so for any reason, they will die in agony, just as the Baker's parents did after Rapunzel was abducted as a baby. All begin the journey into the woods: Jack to sell his beloved cow; Cinderella to her mother's grave; Little Red to her grandmother's house; and the baker, refusing his wife's help, to find the ingredients (\"Prologue\").", "title": "Synopsis" }, { "paragraph_id": 7, "text": "Cinderella receives a gown and golden slippers from her mother's spirit (\"Cinderella at the Grave\"). A mysterious man mocks Jack for valuing his cow more than a \"sack of beans\". Little Red meets a hungry wolf, who persuades her, with ulterior motives, to take a longer path and admire the beauty of the woods (\"Hello, Little Girl\"). The baker, followed by his wife, meets Jack. They convince him that the beans in the baker's father's jacket are magic and trade them for the cow; Jack bids Milky White a tearful farewell (\"I Guess This Is Goodbye\"). 
The baker has qualms about their deceit, but his wife reassures him (\"Maybe They're Magic\").", "title": "Synopsis" }, { "paragraph_id": 8, "text": "The witch has raised Rapunzel in a tall tower accessible only by climbing Rapunzel's long, golden hair (\"Our Little World\"); a prince spies Rapunzel. The baker, in pursuit of Little Red's cape (\"Maybe They're Magic\" Reprise), slays the wolf and rescues Little Red and her grandmother. Little Red rewards him with her cape, and reflects on her experiences (\"I Know Things Now\"). Jack's mother tosses aside his beans, which grow into an enormous stalk. Cinderella flees the festival, pursued by another prince, and the baker's wife hides her; asked about the ball, Cinderella is unimpressed (\"A Very Nice Prince\"). Spotting Cinderella's gold slippers, the baker's wife chases her and loses Milky White. The characters recite morals as the day ends (\"First Midnight\").", "title": "Synopsis" }, { "paragraph_id": 9, "text": "Jack describes his adventure climbing the beanstalk (\"Giants in the Sky\"). He gives the baker gold he stole from the giants to buy back his cow, and returns up the beanstalk to find more; the mysterious man steals the money. Cinderella's prince and Rapunzel's prince, who are brothers, compare their unobtainable amours (\"Agony\"). The baker's wife overhears their talk of a girl with golden hair. She fools Rapunzel and takes a piece of her hair. The mysterious man returns Milky White to the baker.", "title": "Synopsis" }, { "paragraph_id": 10, "text": "The baker's wife again fails to seize Cinderella's slippers. The baker admits they must work together (\"It Takes Two\"). Jack arrives with a hen that lays golden eggs, but Milky White keels over dead as midnight chimes (\"Second Midnight\"). The Witch discovers the prince's visits and demands Rapunzel stay sheltered from the world (\"Stay with Me\"). Rapunzel refuses, and the witch cuts off Rapunzel's hair and banishes her. 
The mysterious man gives the baker money for another cow. Jack meets Little Red, now sporting a wolfskin cape and knife. She goads him into returning to the giants' home to retrieve a golden harp.", "title": "Synopsis" }, { "paragraph_id": 11, "text": "Torn between staying with her prince and escaping, Cinderella leaves him a slipper as a clue (\"On the Steps of the Palace\") and trades shoes with the baker's wife. The baker arrives with another cow; they now have all four items. A great crash sounds, and Jack's mother reports a dead giant in her backyard. Jack returns with the harp. The witch discovers the new cow is useless, and resurrects Milky White, who is fed the ingredients but fails to give milk. The witch explains that Rapunzel's hair will not work because she touched it, and the mysterious man offers corn silk instead; Milky White produces the potion. The witch reveals the mysterious man is the baker's father, and drinks the potion. The mysterious man falls dead, the curse is broken, and the witch regains her youth and beauty, ultimately granting the Baker and his wife permission to live.", "title": "Synopsis" }, { "paragraph_id": 12, "text": "Cinderella's prince seeks the girl who fits the slipper; Cinderella's desperate stepsisters mutilate their feet (\"Careful My Toe\"). Cinderella succeeds and becomes his bride. Rapunzel bears twins and is found by her prince. The witch finds her, and attempts to claim her back, but the witch's powers have been lost in exchange for her youth and beauty. At Cinderella's wedding, birds blind her stepsisters, and the baker's wife, now very pregnant, thanks Cinderella for her help (\"So Happy\" Prelude). Congratulating themselves on living \"happily ever after\", the characters fail to notice another beanstalk growing.", "title": "Synopsis" }, { "paragraph_id": 13, "text": "The narrator continues, \"Once upon a time... 
later.\" Everyone still has wishes—the baker and his wife face new frustrations with their infant son, newly rich Jack misses the kingdom in the sky, Cinderella is bored with life in the palace (\"So Happy\")—but is relatively content.", "title": "Synopsis" }, { "paragraph_id": 14, "text": "With a tremendous crash, a giant's foot destroys the witch's garden and damages the baker's home. The baker travels to the palace, but his warning is ignored by the prince's steward and by Jack's mother. Returning home, he finds Little Red on her way to her grandmother's; he and his wife escort her. Jack decides to slay the new giant and Cinderella investigates her mother's disturbed grave. Everyone returns to the woods, but notices that the weather is more ominous (\"Into the Woods\" Reprise).", "title": "Synopsis" }, { "paragraph_id": 15, "text": "Rapunzel, driven mad, also flees to the woods. Her prince follows and meets his brother; they confess their lust for two new women, Snow White and Sleeping Beauty (\"Agony\" Reprise).", "title": "Synopsis" }, { "paragraph_id": 16, "text": "The baker, his wife, and Little Red find Cinderella's family and the steward, who reveal that the giant set upon the castle. The witch brings news that the giant destroyed the village and the baker's house. The giantess—widow of the giant Jack killed—appears, seeking revenge. As a sacrifice, the group offer up the narrator, who is killed. Jack's mother defends her son, angering the giantess, and the steward silences Jack's mother, inadvertently killing her. As the giantess leaves in search of Jack, Rapunzel is trampled to death leaving the distraught Witch to mourn (\"Witch's Lament\").", "title": "Synopsis" }, { "paragraph_id": 17, "text": "The royal family flee despite the baker's pleas to stay and fight. The witch vows to find Jack and give him to the giantess, and the baker and his wife split up to find him first. Cinderella's prince seduces the baker's wife (\"Any Moment\"). 
The baker finds Cinderella and convinces her to join their group. The baker's wife reflects on her adventure and tryst with the prince (\"Moments in the Woods\"), but stumbles into the giantess's path and is killed.", "title": "Synopsis" }, { "paragraph_id": 18, "text": "The baker, Little Red, and Cinderella await the return of the baker's wife when the witch arrives holding Jack hostage, having found him weeping over the baker's wife's body. The baker turns against Jack, and the two, along with Cinderella and Little Red, blame each other for the situation before the four turn on the witch for cursing the father in the first place (\"Your Fault\"). Chastising their inability to accept their actions' consequences, the witch pelts them with her remaining beans and, struck by a curse for losing the magic beans once again, disappears (\"Last Midnight\").", "title": "Synopsis" }, { "paragraph_id": 19, "text": "Grief-stricken, the baker flees, but is convinced by his father's spirit to face his responsibilities (\"No More\"). He returns and forms a plan to kill the giantess. Cinderella stays behind with the baker's child and confronts her prince over his infidelity. He explains his feelings of unfulfillment and that he wasn't raised to be sincere, and she asks him to leave.", "title": "Synopsis" }, { "paragraph_id": 20, "text": "Little Red discovers the giantess has killed her grandmother, as the baker tells Jack that his mother is dead. Jack vows to kill the steward but the baker dissuades him, while Cinderella comforts Little Red. The baker and Cinderella explain that choices have consequences, and everyone is connected (\"No One Is Alone\").", "title": "Synopsis" }, { "paragraph_id": 21, "text": "The four together slay the giantess, and the other characters—including the royal family, who have starved to death, and the princes and their new paramours—return to share one last set of morals. 
The survivors band together to hail the quartet as their heroes, and the spirit of the baker's wife comforts her mourning husband, encouraging him to tell their child their story. The baker begins to tell his son the tale, while the witch appears and warns: \"Careful the things you say, children will listen\" (\"Finale\").", "title": "Synopsis" }, { "paragraph_id": 22, "text": "Into the Woods premiered at the Old Globe Theatre in San Diego, California, on December 4, 1986, and ran for 50 performances, under the direction of James Lapine. Many of the performers from that production appeared in the Broadway cast, except for John Cunningham as the Narrator/Wolf/Steward, George Coe as the Mysterious Man/Cinderella's Father, Kenneth Marshall as Cinderella's Prince, LuAnne Ponce as Little Red, and Ellen Foley as the Witch. Kay McClelland, who played Rapunzel and Florinda, went with the cast to Broadway but only played Florinda.", "title": "Productions" }, { "paragraph_id": 23, "text": "The show evolved, the most notable change being the addition of the song \"No One Is Alone\" in the middle of the run. Because of this, the finale was also altered. It was originally \"Midnight/Ever After (reprise)/It Takes Two (reprise)/Into the Woods (reprise 2)\" but evolved into its present form. Another notable change was that, originally, the baker and Cinderella became a couple during the finale and kissed before singing the reprise of \"It Takes Two\".", "title": "Productions" }, { "paragraph_id": 24, "text": "Into the Woods opened on Broadway at the Martin Beck Theatre on November 5, 1987, and closed on September 3, 1989, after 765 performances. 
It starred Bernadette Peters as the Witch, Joanna Gleason as the Baker's Wife, Chip Zien as the Baker, Robert Westenberg as the Wolf/Cinderella's Prince, Tom Aldredge as the Narrator/Mysterious Man, Kim Crosby as Cinderella, Danielle Ferland as Little Red Ridinghood, Ben Wright as Jack, Chuck Wagner as Rapunzel's Prince, Barbara Bryne as Jack's Mother, Pamela Winslow as Rapunzel, Merle Louise as Cinderella's Mother/Granny/Giant's Wife, Edmund Lyndeck as Cinderella's Father, Joy Franz as Cinderella's Stepmother, Philip Hoffman as the Steward, Lauren Mitchell as Lucinda, Kay McClelland as Florinda, Jean Kelly as Snow White, and Maureen Davis as Sleeping Beauty. It was directed by Lapine, with musical staging by Lar Lubovitch, settings by Tony Straiges, lighting by Richard Nelson, costumes by Ann Hould-Ward (based on original concepts by Patricia Zipprodt and Ann Hould-Ward), and makeup by Jeff Raum. The original production won the 1988 New York Drama Critics' Circle Award and the Drama Desk Award for Best Musical, and the original cast recording won a Grammy Award. The show was nominated for ten Tony Awards, and won three: Best Score (Sondheim), Best Book (Lapine) and Best Actress in a Musical (Gleason).", "title": "Productions" }, { "paragraph_id": 25, "text": "Peters left the show after almost five months due to a prior commitment to film the movie Slaves of New York. The Witch was then played by Betsy Joslyn (from March 30, 1988); Phylicia Rashad (from April 14, 1988); Betsy Joslyn (from July 5, 1988); Nancy Dussault (from December 13, 1988); and Ellen Foley (from August 1, 1989, until the closing). 
Understudies for the part included Joslyn, Marin Mazzie, Lauren Vélez, Suzzanne Douglas, and Joy Franz.", "title": "Productions" }, { "paragraph_id": 26, "text": "Other cast replacements included Dick Cavett as the Narrator (as of July 19, 1988, as a temporary engagement after which Aldredge returned), Edmund Lyndeck as the Mysterious Man, Patricia Ben Peterson as Cinderella, LuAnne Ponce returning as Little Red, Jeff Blumenkrantz as Jack, Marin Mazzie as Rapunzel (as of March 7, 1989), Dean Butler and Don Goodspeed as Rapunzel's Prince, Susan Gordon Clark as Florinda, Teresa Burrell as Lucinda, Adam Grupper as the Steward, Cindy Robinson and Heather Shulman as Snow White, and Kay McClelland, Lauren Mitchell, Cynthia Sikes, and Mary Gordon Murray as the Baker's Wife.", "title": "Productions" }, { "paragraph_id": 27, "text": "In 1989, from May 23 to May 25 the full original cast (with the exception of Cindy Robinson as Snow White instead of Jean Kelly) reunited for three performances to tape the show in its entirety for the Season 10 premiere episode of PBS's American Playhouse, which first aired on March 15, 1991. The show was filmed professionally with seven cameras on the set of the Martin Beck Theatre in front of an audience, with certain elements slightly changed for the recording in order to better fit the screen, such as the lighting and minor costume differences. There were also pick-up shots not filmed in front of an audience for various purposes. This video has since been released on VHS and DVD and, on occasion, remastered and rereleased.", "title": "Productions" }, { "paragraph_id": 28, "text": "Tenth Anniversary benefit performances were held on November 9, 1997, at the Broadway Theatre (New York), with most of the original cast. 
Original cast understudies Chuck Wagner and Jeff Blumenkrantz played the Wolf/Cinderella's Prince and the Steward in place of Robert Westenberg and Philip Hoffmann, while Jonathan Dokuchitz (who joined the Broadway production as an understudy in 1989) played Rapunzel's Prince in place of Wagner. This concert featured the duet \"Our Little World\", written for the first London production of the show.", "title": "Productions" }, { "paragraph_id": 29, "text": "On November 9, 2014, most of the original cast reunited for two reunion concerts and discussion in Costa Mesa, California. Mo Rocca hosted the reunion and interviewed Sondheim, Lapine, and each cast member. Appearing were Bernadette Peters, Joanna Gleason, Chip Zien, Danielle Ferland, Ben Wright and husband and wife Robert Westenberg and Kim Crosby. The same group presented this discussion/concert on June 21, 2015, at the Brooklyn Academy of Music, New York City.", "title": "Productions" }, { "paragraph_id": 30, "text": "A U.S. tour started performances on November 22, 1988. The cast included Cleo Laine as the Witch, Rex Robbins as the Narrator and Mysterious Man, Ray Gill and Mary Gordon Murray as the Baker and his Wife, Kathleen Rowe McAllen as Cinderella, Chuck Wagner as the Wolf/Cinderella's Prince, Douglas Sills as Rapunzel's Prince, Robert Duncan McNeill and Charlotte Rae as Jack and his Mother, Marcus Olson as the Steward, and Susan Gordon Clark reprising her role as Florinda from the Broadway production. 
The set was almost completely reconstructed, and there were changes to the script that altered certain story elements.", "title": "Productions" }, { "paragraph_id": 31, "text": "Cast replacements included Betsy Joslyn as the Witch, Peter Walker as the Narrator/Mysterious Man, James Weatherstone as the Wolf/Cinderella's Prince, Jonathan Hadley as Rapunzel's Prince, Marcus Olson as the Baker, later replaced by Adam Grupper (who understudied the role on Broadway), Judy McLane as the Baker's Wife, Nora Mae Lyng as Jack's Mother, later replaced by Frances Ford, Stuart Zagnit as the Steward, Jill Geddes as Cinderella, later replaced by Patricia Ben Peterson, and Kevin R. Wright as Jack.", "title": "Productions" }, { "paragraph_id": 32, "text": "The tour played cities around the country, such as Fort Lauderdale, Florida, Los Angeles, and Atlanta. The tour ran at the John F. Kennedy Center for the Performing Arts from June to July 16, 1989, with The Washington Post's reviewer writing: \"his lovely score—poised between melody and dissonance—is the perfect measure of our tenuous condition. The songs invariably follow the characters' thinking patterns, as they weigh their options and digest their experience. Needless to say, that doesn't make for traditional show-stoppers. But it does make for vivacity of another kind. And Sondheim's lyrics...are brilliant.... I think you'll find these cast members alert and engaging.\"", "title": "Productions" }, { "paragraph_id": 33, "text": "The original West End production opened on September 25, 1990, at the Phoenix Theatre and closed on February 23, 1991, after 197 performances. It was directed by Richard Jones and produced by David Mirvish, with set design by Richard Hudson, choreography by Anthony Van Laast, costumes by Sue Blane, and orchestrations by Jonathan Tunick. The cast featured Julia McKenzie as the Witch, Ian Bartholomew as the Baker, Imelda Staunton as the Baker's Wife and Clive Carter as the Wolf/Cinderella's Prince. 
The show received seven Olivier Award nominations in 1991, winning Best Actress in a Musical (Staunton) and Best Director of a Musical (Jones).", "title": "Productions" }, { "paragraph_id": 34, "text": "The song \"Our Little World\" was added. This song was a duet for the Witch and Rapunzel giving further insight into the Witch's care for her self-proclaimed daughter and the desire Rapunzel has to see the world outside her tower. The show's overall feel was darker than that of the original Broadway production. Critic Michael Billington wrote: \"But the evening's triumph belongs also to director Richard Jones, set designer Richard Hudson and costume designer Sue Blane who evoke exactly the right mood of haunted theatricality. Old-fashioned footlights give the faces a sinister glow. The woods themselves are a semi-circular, black-and-silver screen punctuated with nine doors and a crazy clock: they achieve exactly the 'agreeable terror' of Gustave Dore's children's illustrations. And the effects are terrific: doors open to reveal the rotating magnified eyeball or the admonitory finger of the predatory giant.\"", "title": "Productions" }, { "paragraph_id": 35, "text": "A new intimate production of the show opened (billed as the first London revival) at the Donmar Warehouse on 16 November 1998, closing on 13 February 1999. It was directed by John Crowley and designed by his brother, Bob Crowley. The cast included Clare Burt as the Witch, Nick Holder as the Baker, Sophie Thompson as the Baker's Wife, Jenna Russell as Cinderella, Sheridan Smith as Little Red, Damian Lewis as the Wolf/Cinderella's Prince, and Frank Middlemiss as the Narrator. Russell later appeared as the Baker's Wife in the 2010 Regent's Park production. 
Thompson won the 1999 Olivier Award for Best Actress in a Musical, while the production was nominated for Outstanding Musical Production.", "title": "Productions" }, { "paragraph_id": 36, "text": "A revival opened at the Ahmanson Theatre in Los Angeles, running from February 1 to March 24, 2002. It had the same direction, choreography, and principal cast as the production that later ran on Broadway.", "title": "Productions" }, { "paragraph_id": 37, "text": "The 2002 Broadway revival, directed by Lapine and choreographed by John Carrafa, began previews on April 13, 2002, and opened April 30 at the Broadhurst Theatre, closing on December 29 after a run of 18 previews and 279 regular performances. It starred Vanessa Williams as the Witch, John McMartin as the Narrator/Mysterious Man, Stephen DeRosa as the Baker, Kerry O'Malley as the Baker's Wife, Gregg Edelman as the Wolf/Cinderella's Prince, Christopher Sieber as the Wolf/Rapunzel's Prince, Molly Ephraim as Little Red, Adam Wylie as Jack, and Laura Benanti as Cinderella. Judi Dench provided the giantess's pre-recorded voice.", "title": "Productions" }, { "paragraph_id": 38, "text": "Lapine revised the script slightly for this production, with a cameo appearance of the Three Little Pigs restored from the earlier San Diego production. Other changes, apart from numerous small dialogue changes, included the addition of the song \"Our Little World\", the addition of a second Wolf who competes with the first for Little Red's attention (portrayed by the same actor as Rapunzel's Prince), the portrayal of Jack's cow by a live performer (Chad Kimball) in an intricate costume, and new lyrics for \"Last Midnight\", now a menacing lullaby sung by the Witch to the Baker's baby.", "title": "Productions" }, { "paragraph_id": 39, "text": "This production featured scenic design by Douglas W. Schmidt, costume design by Susan Hilferty, lighting design by Brian MacDevitt, sound design by Dan Moses Schreier and projection design by Elaine J. McCarthy. 
The revival won Tonys for the Best Revival of a Musical and Best Lighting Design. The wardrobe from this Broadway revival is on display at Costume World in South Florida.", "title": "Productions" }, { "paragraph_id": 40, "text": "A revival at the Royal Opera House's Linbury Studio in Covent Garden had a limited run from June 14 to 30, 2007, followed by a short stint at The Lowry theatre, Salford Quays, Manchester on 4–7 July. The production mixed opera singers, musical theatre actors, and film and television actors, including Anne Reid as Jack's Mother and Gary Waldhorn as the Narrator. Directed by Will Tuckett, it received mixed reviews, although there were clear standout performances.", "title": "Productions" }, { "paragraph_id": 41, "text": "The production completely sold out three weeks before opening. As this was an \"opera\" production, the show and its performers were overlooked in the \"musical\" nominations for the 2008 Laurence Olivier Awards. It featured Suzie Toase (Little Red), Peter Caulfield (Jack), Beverley Klein (Witch), Anna Francolini (Baker's Wife), Clive Rowe (Baker), Nicholas Garrett (Wolf/Cinderella's Prince), and Lara Pulver (Lucinda). This was the second Sondheim musical to be staged by the Opera House, following 2003's Sweeney Todd.", "title": "Productions" }, { "paragraph_id": 42, "text": "The Olivier Award-winning Regent's Park Open Air Theatre production, directed by Timothy Sheader and choreographed by Liam Steel, ran for a six-week limited season from 6 August to 11 September 2010. The cast included Hannah Waddingham as the Witch, Mark Hadfield as the Baker, Jenna Russell as the Baker's Wife, Helen Dallimore as Cinderella, Michael Xavier as the Wolf/Cinderella's Prince, and Judi Dench as the recorded voice of the Giant. Gareth Valentine was the Musical Director. The musical was performed outdoors in a wooded area. 
Whilst the book remained mostly unchanged, the subtext of the plot was dramatically altered by casting the role of the Narrator as a young schoolboy lost in the woods following a family argument – a device used to further illustrate the musical's themes of parenting and adolescence.", "title": "Productions" }, { "paragraph_id": 43, "text": "The production opened to wide critical acclaim, much of the press commenting on the effectiveness of the open air setting. The Telegraph reviewer, for example, wrote: \"It is an inspired idea to stage this show in the magical, sylvan surroundings of Regent's Park, and designer Soutra Gilmour has come up with a marvellously rickety, adventure playground of a set, all ladders, stairs and elevated walkways, with Rapunzel discovered high up in a tree.\" The New York Times reviewer commented: \"The natural environment makes for something genuinely haunting and mysterious as night falls on the audience\". Sondheim attended twice, reportedly extremely pleased with the production. The production also won the Laurence Olivier Award for Best Musical Revival and Xavier was nominated for the Laurence Olivier Award for Best Performance in a Supporting Role in a Musical.", "title": "Productions" }, { "paragraph_id": 44, "text": "The production was recorded in its entirety, available to download and watch from Digital Theatre.", "title": "Productions" }, { "paragraph_id": 45, "text": "The Regent's Park Open Air Theatre production transferred to the Public Theater's 2012 summer series of free performances Shakespeare in the Park at the Delacorte Theater in Central Park, New York, with an American cast as well as new designers. Sheader again was the director and Steel served as co-director and choreographer. Performances were originally to run from July 24 (delayed from July 23 due to the weather) to August 25, 2012, but the show was extended until September 1. 
The cast included Amy Adams as the Baker's Wife, Donna Murphy as the Witch, Denis O'Hare as the Baker, Chip Zien (the Baker in the 1987 Broadway cast) as the Mysterious Man/Cinderella's Father, Ivan Hernandez as the Wolf/Cinderella's Prince, Jessie Mueller as Cinderella, Jack Broderick as the young Narrator, Gideon Glick as Jack, Cooper Grodin as Rapunzel's Prince, Sarah Stiles as Little Red, Josh Lamon as the Steward, and Glenn Close as the Voice of the Giant. The set was a \"collaboration between original Open Air Theatre designer Soutra Gilmour and...John Lee Beatty, [and] rises over 50 feet in the air, with a series of tree-covered catwalks and pathways.\" The production was dedicated to Nora Ephron, who had died earlier in 2012. In February and May 2012, reports of a possible Broadway transfer surfaced with the production's principal actors in negotiations to reprise their roles. In January 2013, it was announced that the production would not transfer to Broadway due to scheduling conflicts.", "title": "Productions" }, { "paragraph_id": 46, "text": "For its annual fully staged musical event, the Hollywood Bowl produced a limited run of Into the Woods from July 26–28, 2019, directed and choreographed by Robert Longbottom. The cast included Skylar Astin as the Baker, Sutton Foster as the Baker's Wife, Patina Miller as the Witch, Sierra Boggess as Cinderella, Cheyenne Jackson as the Wolf/Cinderella's Prince, Chris Carmack as Rapunzel's Prince, Gaten Matarazzo as Jack, Anthony Crivello as the Mysterious Man, Edward Hibbert as the Narrator, Shanice Williams as Little Red, Hailey Kilgore as Rapunzel, Rebecca Spencer as Jack's Mother, original Broadway cast member Gregory North as Cinderella's Father, and Whoopi Goldberg as the voice of the Giant. 
The production featured Ann Hould-Ward's costumes from the Original Broadway Production.", "title": "Productions" }, { "paragraph_id": 47, "text": "In November 2020, it was announced that New York City Center would stage Into the Woods as part of its Encores! series. In August 2021, it was announced that Christian Borle, Sara Bareilles, Ashley Park, and Heather Headley had joined the cast as, respectively, the Baker, his Wife, Cinderella, and the Witch. Park was initially scheduled to star in the Encores! production of Thoroughly Modern Millie, but it was canceled due to the COVID-19 pandemic. Headley had played the Witch at The Muny in 2015.", "title": "Productions" }, { "paragraph_id": 48, "text": "In December 2021, High School Musical: The Musical: The Series star Julia Lester joined the cast as Little Red Ridinghood, alongside Shereen Pimentel as Rapunzel, Jordan Donica as her Prince, and Cole Thompson as Jack. In March 2022, it was revealed that Denée Benton had replaced Park as Cinderella, with other cast members including Gavin Creel as the Wolf/Cinderella's Prince, Annie Golden as Cinderella's Mother/Granny/Giant's Wife, Ann Harada as Jack's Mother, David Patrick Kelly as the Narrator/Mysterious Man, Tiffany Denise Hobbs as Lucinda (later replaced by Ta'Nika Gibson), Brooke Ishibashi as Florinda, Kennedy Kanagawa as Milky White, Lauren Mitchell (who played Lucinda in the 1987 Broadway production) as Cinderella's Stepmother, and David Turner as the Steward. In April 2022, Neil Patrick Harris was announced as playing the Baker, replacing Borle due to a schedule conflict. Albert Guerzon also joined the cast as Cinderella's Father. Jason Forbach, Mary Kate Moore, and Cameron Johnson were the production's swings. After Jordan Donica tested positive for COVID-19, Jason Forbach played Rapunzel's Prince for the first week of performances.", "title": "Productions" }, { "paragraph_id": 49, "text": "The production ran from May 4–15, 2022, and was directed by Encores! 
artistic director Lear deBessonet. This was the final Encores! show to have Rob Berman conducting the Encores! orchestra.", "title": "Productions" }, { "paragraph_id": 50, "text": "Fewer than two weeks after the limited Encores! engagement closed, it was announced that the production would transfer to Broadway at the St. James Theatre. The Broadway production officially opened on July 10, 2022 (with previews having begun on June 28), to universally positive reviews.", "title": "Productions" }, { "paragraph_id": 51, "text": "Most of the Encores! cast transferred, with the additions of Brian d'Arcy James as the Baker, Patina Miller as the Witch, Phillipa Soo as Cinderella, and Joshua Henry as Rapunzel's Prince. Other new cast members included Nancy Opel as Cinderella's Stepmother, Aymee Garcia as Jack's Mother, Alysia Velez as Rapunzel, and Paul Kreppel, Diane Phelan, Alex Joseph Grayson, Felicia Curry, Delphi Borich, and Lucia Spina as understudies. From July 24 to August 2, Cheyenne Jackson temporarily filled in for Gavin Creel as the Wolf and Cinderella's Prince, reprising his roles from the Hollywood Bowl production. On July 18, 2022, Sara Bareilles revealed on her Instagram Stories that a cast album of this production was being recorded. During July, it was announced that the production, originally scheduled for an eight-week run, had extended its run through October 16.", "title": "Productions" }, { "paragraph_id": 52, "text": "On August 4, 2022, it was announced that the entire Broadway cast would remain with the show through September 4. On September 6, married couple Stephanie J. Block and Sebastian Arcelus replaced Bareilles and James as the Baker's Wife and the Baker. Other replacements included Krysta Rodriguez as Cinderella, Katy Geraghty replacing Julia Lester as Little Red, and Jim Stanek replacing David Turner as the Steward. 
Montego Glover also began sharing the role of the Witch with Miller, and Andy Karl played a limited run as the Wolf/Cinderella's Prince from September 6–15, filling in for Creel. Ann Harada joined the cast reprising her role as Jack's Mother from the Encores! production on September 27. During that same month, it was announced that the production was given a final extension through January 8, 2023. On September 22, it was announced that James would return to the cast as the Baker starting October 25, and Karl would also return, this time as Rapunzel's Prince, starting October 11. The cast album was released on September 30. On October 25, it was announced that Denée Benton would join the cast reprising her role as Cinderella from the Encores! production on November 21. She left the production on December 24. On November 17, it was announced that Joaquina Kalukango would start playing the Witch from December 16 to the show's closing date January 8. Karl's extended run ended December 2 for the return of Henry. That same day, the cast recording was released on CD. On December 15, it was announced that understudy Diane Phelan would take over as Cinderella on December 26 for the last two weeks of the show's run. It was also announced that Arcelus would return to the production, replacing James as the Baker, starting January 3.", "title": "Productions" }, { "paragraph_id": 53, "text": "The Broadway production closed on January 8, 2023. 
The final Broadway cast was Kalukango as the Witch, Arcelus and Block as the Baker and his Wife, Creel as the Wolf/Cinderella's Prince, Phelan as Cinderella, Cole Thompson as Jack, Geraghty as Little Red Ridinghood, Henry as Rapunzel's Prince, David Patrick Kelly as the Narrator/Mysterious Man, Harada as Jack's Mother, Opel as Cinderella's Stepmother, Velez as Rapunzel, Stanek as the Steward, Annie Golden as Cinderella's Mother/Granny/Giantess, Brooke Ishibashi as Florinda, Ta'Nika Gibson as Lucinda, Albert Guerzon as Cinderella's Father/Puppeteer, Kennedy Kanagawa as Milky White/Puppeteer, and Jason Forbach, Mary Kate Moore, Cameron Johnson, Kreppel, Grayson, Curry, Borich, Spina, and Sam Simahk as understudies. The production's cast recording won the Grammy Award for Best Musical Theater Album. The recording was released on vinyl on March 17, 2023. The production was nominated for six Tony Awards.", "title": "Productions" }, { "paragraph_id": 54, "text": "In December 2021, it was announced that a new production of Into the Woods would take place at the Theatre Royal in Bath for four weeks, starting on August 17. It is directed by Terry Gilliam and Leah Hausman, who had previously worked together on the staging of two operas by Berlioz at the English National Opera: The Damnation of Faust in 2011 and Benvenuto Cellini in 2014. The show was first booked for the Old Vic Theatre in 2020 but was delayed due to the COVID-19 pandemic and then canceled altogether. The cast included Julian Bleach as the Mysterious Man, Nicola Hughes as the Witch, Rhashan Stone as the Baker, Alex Young as the Baker's Wife, Nathanael Campbell as the Wolf and Cinderella's Prince, Audrey Brisson as Cinderella, Barney Wilkinson as Jack, Gillian Bevan as Jack's Mother, Charlotte Jaconelli as Florinda, Maria Conneeley as Rapunzel, and Lauren Conroy as Little Red Ridinghood, making her professional stage debut. Milky White is played in pantomime by the dancer Faith Prendergast. 
The music director is Stephen Higgins, with Jon Bausor in charge of the production design and Anthony McDonald of the costumes.", "title": "Productions" }, { "paragraph_id": 55, "text": "In contrast to the simultaneous Broadway revival, this production is quite visual, with elaborate sets and props, its conceit being that the characters are figures in a Victorian toy theatre a young girl is playing with. The toy theatre occupies the full stage, with \"giant\" props (cans of beans, a watch, a vase, a doll) appearing throughout and used by the characters as elements of the setting (for example, Rapunzel's tower is a pile of bean cans). Slapstick is also emphasized, \"done in the spirit of what Sondheim has written\".", "title": "Productions" }, { "paragraph_id": 56, "text": "Lapine and Sondheim supported this new vision, and Sondheim gave his approval for the cast before he died. Sondheim also discussed the production with the directors over Zoom. Allegedly he liked what he saw so much that he fell off his chair laughing.", "title": "Productions" }, { "paragraph_id": 57, "text": "Terry Gilliam had met with Sondheim in the 1990s for a film adaptation of the show that Paramount was supposed to produce, with Robin Williams and Emma Thompson as the Baker and Baker's Wife, but Gilliam refused to do it because he considered the script inferior to the original.", "title": "Productions" }, { "paragraph_id": 58, "text": "The show opened to overall positive reviews, with critics praising this \"hallucinogenic take\", its \"imaginative imagery\" and \"sheer spectacle\", and acclaiming Leah Hausman's \"particularly crisp\" choreography, while some regretted the lack of an \"emotional connection between the characters and the audience\" and felt that \"nothing quite develops its emotional power as much as it might\". Yet all recognized the strength and vocal talent of the cast. 
Special compliments often went to the \"outstanding work\" of Faith Prendergast as Milky White, with The Guardian going so far as to call it \"the most characterful presence\" on the stage.", "title": "Productions" }, { "paragraph_id": 59, "text": "The show transferred to a West End theatre in late 2022 or early 2023.", "title": "Productions" }, { "paragraph_id": 60, "text": "On December 6, 2022, it was announced that the 2022 Broadway revival production would tour the U.S. in 2023, starting on February 18. It starred Montego Glover as the Witch, Sebastian Arcelus and Stephanie J. Block as the Baker and his Wife, Gavin Creel as the Wolf/Cinderella's Prince, Cole Thompson as Jack, Katy Geraghty as Little Red, David Patrick Kelly as the Narrator/Mysterious Man, Nancy Opel as Cinderella's Stepmother, Aymee Garcia as Jack's Mother (from Boston onward), Ta'Nika Gibson as Lucinda, Brooke Ishibashi as Florinda, Jim Stanek as the Steward, Alysia Velez as Rapunzel, and Kennedy Kanagawa as Milky White/Puppeteer, all reprising their Broadway roles. On December 15, it was announced that Diane Phelan would reprise her role as Cinderella on tour. On January 17, the rest of the cast was announced, including Broadway understudies Jason Forbach and Felicia Curry as Rapunzel's Prince and the Giant's Wife/Cinderella's Mother/Granny respectively. Other cast members included Rayanne Gonzales as Jack's Mother (in Buffalo and Washington, D.C. only), Josh Breckenridge as Cinderella's Father/Puppeteer, and Paul Kreppel, Sam Simahk, Erica Durham, Ellie Fishman, Marya Grandy, Ximone Rose, and Eddie Lopez as understudies.", "title": "Productions" }, { "paragraph_id": 61, "text": "On January 31, it was announced that the production was extending its Boston engagement by another week. It was also announced that Arcelus and Block would not perform six days of the engagement. 
On February 25–26, Andy Karl reprised his Broadway role of Rapunzel's Prince during the opening weekend of the tour's engagement in Washington D.C. while Forbach stepped into the role of the Baker for Arcelus, who was recovering from an injury sustained earlier in the week. Forbach filled in for Arcelus for over two weeks. On February 28, Forbach announced on his Instagram Stories that he would play the Baker during Arcelus's absence in Boston. Understudy Sam Simahk played Rapunzel's Prince in his place. On March 1, it was announced during \"Wonderstudy Wednesday\" on Instagram that understudy Ximone Rose would play the Baker's Wife during Block's absence in Boston. From March 28 to April 2, Cameron Johnson reprised his role as an understudy while Simahk played Rapunzel's Prince. Durham took over from Ishibashi as Florinda on July 5 and Sabrina Santana joined the cast as an understudy. Krysta Rodriguez and Albert Guerzon reprised their roles of Cinderella and Cinderella's Father/Puppeteer from the Broadway production on July 13 and July 18 respectively and played the roles until the tour closed on July 30.", "title": "Productions" }, { "paragraph_id": 62, "text": "The production ran for a ten-city engagement tour, visiting Shea's Performing Arts Center in Buffalo, New York, the Kennedy Center Opera House in Washington, D.C., the Emerson Colonial Theatre in Boston, the Miller Theater in Philadelphia, the Blumenthal Performing Arts Center in Charlotte, North Carolina, the James M. Nederlander Theatre in Chicago, the Curran Theatre in San Francisco, the Ahmanson Theatre in Los Angeles, the Tennessee Performing Arts Center in Nashville, Tennessee, and the Dr. Phillips Center in Orlando, Florida.", "title": "Productions" }, { "paragraph_id": 63, "text": "A production played in Sydney from 19 March 1993 to 5 June 1993 at the Drama Theatre, Sydney Opera House. 
It starred Judi Connelli as the Witch, Geraldine Turner as the Baker's Wife, Tony Sheldon as the Baker, Philip Quast as the Wolf/Cinderella's Prince, Pippa Grandison as Cinderella, Sharon Millerchip as Little Red Ridinghood, and D. J. Foster as Rapunzel's Prince. The Melbourne Theatre Company production played from 17 January 1998 to 21 February 1998 at the Playhouse, Victorian Arts Centre. It starred Rhonda Burchmore as the Witch, John McTernan as the Baker, Gina Riley as the Baker's Wife, Lisa McCune as Cinderella, Robert Grubb as the Wolf/Cinderella's Prince, Peter Carroll as the Narrator/Mysterious Man, and Tamsin Carroll as Little Red Ridinghood. In 2000, there was a production starring Pat Harrington, Jr. as the Narrator, Brian d'Arcy James as the Baker, Leah Hocking as the Baker's Wife, Tracy Katz as Little Red, Liz McCartney as the Witch, and Patricia Ben Peterson as Cinderella at the Ordway Center for the Performing Arts.", "title": "Productions" }, { "paragraph_id": 64, "text": "In 2009, a production was staged in Sacramento, California, by the Wells Fargo Pavilion. It starred Yvette Cason as the Witch, Jeffry Denman as the Baker, Vicki Lewis as his Wife, Tracy Katz reprising her role as Little Red Ridinghood from the first national tour, Jason Forbach as the Wolf/Rapunzel's Prince, Gordon Goodman as Cinderella's Prince, Kim Huber as Cinderella, Matthew Wolpe as Jack, and Michael G. Hawkins as the Narrator/Mysterious Man.", "title": "Productions" }, { "paragraph_id": 65, "text": "The first professional Spanish language production, Dentro del Bosque, was produced by University of Puerto Rico Repertory Theatre and premiered in San Juan at Teatro de la Universidad (University Theatre) on March 14, 2013. 
The cast included Víctor Santiago as the Baker, Ana Isabelle as the Baker's Wife and Lourdes Robles as the Witch.", "title": "Productions" }, { "paragraph_id": 66, "text": "A 25th-anniversary co-production between Baltimore's Center Stage and Westport Country Playhouse directed by Mark Lamos was notable for casting original Little Red Ridinghood, Danielle Ferland, as the Baker's Wife. The cast included Erik Liberman as the Baker, Lauren Kennedy as the Witch, Jeffry Denman as the Narrator, Nik Walker as the Wolf/Cinderella's Prince, Dana Steingold as Little Red Ridinghood, Justin Scott Brown as Jack, Jenny Latimer as Cinderella, Cheryl Stern as Jack's Mother, Robert Lenzi as Rapunzel's Prince/Cinderella's Father, Alma Cuervo as Cinderella's Stepmother/Granny/Giant's Wife, Britney Coleman as Rapunzel/Cinderella's Mother, Nikka Lanzarone as Florinda, Eleni Delopoulos as Lucinda, and Jeremy Lawrence as the Mysterious Man. The production received 2011–2012 Connecticut Critics Circle Awards for Best Production, Best Ensemble, and Steingold's Little Red Ridinghood.", "title": "Productions" }, { "paragraph_id": 67, "text": "In 2014, a production premiered at the Théâtre du Châtelet in Paris from April 1 to 12. It starred Nicholas Garrett as the Baker, Francesca Jackson as Little Red Ridinghood, Kimy McLaren as Cinderella, Christine Buffle as the Baker's Wife, Beverley Klein as the Witch, Pascal Charbonneau and Rebecca de Pont Davies as Jack and his Mother, Damian Thantrey as the Wolf/Cinderella's Prince, David Curry as the Wolf/Rapunzel's Prince, Louise Alder as Rapunzel, and Fanny Ardant as the voice of the Giantess.", "title": "Productions" }, { "paragraph_id": 68, "text": "The Roundabout Theatre production, directed by Noah Brody and Ben Steinfeld, began performances Off-Broadway at the Laura Pels Theatre on December 19, 2014, in previews, officially on January 22, 2015, and closed on April 12, 2015. 
Like the original Broadway production 28 years prior, this production had a try-out run at the Old Globe Theatre in San Diego, California, from July 12 to August 17, 2014, with opening night taking place on July 17. This version was completely reimagined in minimalist style by the Fiasco Theater Company, featuring only ten actors playing multiple parts and a single piano accompanist. A national tour of this production began on November 29, 2016.", "title": "Productions" }, { "paragraph_id": 69, "text": "The DreamCatcher Theatre production opened in January 2015 and played a sold-out run at the Adrienne Arsht Center in Miami, Florida. Tituss Burgess starred as the Witch, the first male actor to do so. The cast also included Arielle Jacobs as the Baker's Wife, JJ Caruncho as the Baker, Justin John Moniz as the Wolf/Cinderella's Prince, Wayne LeGette as the Narrator/Mysterious Man, Annemarie Rosano as Cinderella, and Matthew Janisse as Rapunzel's Prince.", "title": "Productions" }, { "paragraph_id": 70, "text": "The musical had a production at The Muny in Forest Park, St. Louis, Missouri, running from July 21 through 28, 2015. The cast included Heather Headley (Witch), Erin Dilly (Baker's Wife), Rob McClure (Baker), Ken Page (Narrator), Elena Shaddow (Cinderella), Andrew Samonsky (Wolf/Cinderella's Prince), Samantha Massell (Rapunzel), and Michael McCormick (Mysterious Man/Cinderella's Father).", "title": "Productions" }, { "paragraph_id": 71, "text": "The Hart House Theatre production ran in Toronto, Ontario, from January 15 to 30, 2016, and again from February 9 to 11, 2023. 
A production ran at the West Yorkshire Playhouse in Leeds in a collaboration with Opera North from 2 June 2016 to 25 June 2016.", "title": "Productions" }, { "paragraph_id": 72, "text": "The Israeli premiere of the musical, אל תוך היער (El Toch Ha-ya-ar), opened in Tel Aviv in August 2016 for a limited run produced by The Tramp Productions and Stuff Like That, starring Roi Dolev as the Witch, the second male actor to do so.", "title": "Productions" }, { "paragraph_id": 73, "text": "In 2019, there was a production done at the Patchogue Theatre starring Constantine Maroulis as the Wolf/Cinderella's Prince, Melissa Errico as the Baker's Wife, Ali Ewoldt as Cinderella, Alice Ripley as the Witch, Jim Stanek as the Baker, Alan Muraoka as the Narrator/Mysterious Man, and Darren Ritchie as Rapunzel's Prince. Also in 2019, Into the Woods was done by the Barrington Stage Company in Pittsfield, Massachusetts. It starred Mykal Kilgore as the Witch, Mara Davi as the Baker's Wife, Jonathan Raviv as the Baker, Pepe Nufrio as Rapunzel's Prince, Sarah Dacey Charles as Cinderella's Stepmother/Granny/Cinderella's Mother, Dorcas Leung as Little Red Ridinghood, Amanda Robles as Cinderella, Thom Sesma as the Narrator/Mysterious Man, Kevin Toniazzo-Naughton as the Wolf/Cinderella's Prince, Clay Singer as Jack, Zoë Aarts as Lucinda, Megan Orticelli as Florinda, and Leslie Becker as the Giant's Wife/Jack's Mother.", "title": "Productions" }, { "paragraph_id": 74, "text": "A 2022 production staged at Arkansas Repertory Theatre featured the pre-recorded voice of former Secretary of State and one-time Presidential nominee Hillary Clinton as the Giant.", "title": "Productions" }, { "paragraph_id": 75, "text": "In 2023, a production was done by Open Stage Theatre Company in Harrisburg, Pennsylvania. 
A production by Belvoir St Theatre in Sydney, Australia ran in 2023, from 23 March to 30 April.", "title": "Productions" }, { "paragraph_id": 76, "text": "The original principal casts of notable stage productions of Into the Woods.", "title": "Principal characters and casts" }, { "paragraph_id": 77, "text": "The musical has been adapted into a child-friendly version for use by schools and young companies, with the second act completely removed, as well as almost half the material from the first. The show is shortened from the original two and a half hours to fit in a 50-minute range, and the music transposed into keys that more easily fit young voices. It is licensed through Music Theatre International's Broadway Junior collection. The plot differs from the original in that the story ends with a \"happy ending\".", "title": "Adaptations" }, { "paragraph_id": 78, "text": "In 2019, a similar adaptation, Into the Woods Sr., adapted for performance by senior citizens in community centers and nursing homes, premiered. It is available under license.", "title": "Adaptations" }, { "paragraph_id": 79, "text": "A theatrical film adaptation of the musical was produced by Walt Disney Pictures, directed by Rob Marshall, and starring Meryl Streep as the Witch, Emily Blunt as the Baker's Wife, James Corden as the Baker, Anna Kendrick as Cinderella, Chris Pine as Cinderella's Prince, Daniel Huttlestone as Jack, Lilla Crawford as Little Red Ridinghood, Tracey Ullman as Jack's Mother, Billy Magnussen as Rapunzel's Prince, Christine Baranski as Cinderella's Stepmother, MacKenzie Mauzy as Rapunzel, Tammy Blanchard as Florinda, and Johnny Depp as the Wolf. The film was released on December 25, 2014. It was a critical and commercial hit, grossing over $213 million worldwide. For her performance as the Witch, Streep was nominated for the Academy Award for Best Supporting Actress. 
The film also received Academy Award nominations for Best Production Design and Best Costume Design.", "title": "Adaptations" }, { "paragraph_id": 80, "text": "In most productions of Into the Woods, including the original Broadway production, several parts are doubled. Cinderella's Prince and the Wolf, who both cannot control their appetites, are usually played by the same actor. Similarly, so are the Narrator and the Mysterious Man, who both comment on the story while avoiding any personal involvement or responsibility. Granny and Cinderella's Mother, both matriarchal characters, are also typically played by the same person, who also gives voice to the nurturing but later murderous Giant's Wife.", "title": "Analysis of book and music" }, { "paragraph_id": 81, "text": "The show covers multiple themes: growing up, parents and children, accepting responsibility, morality, and finally, wish fulfillment and its consequences. Time Magazine's reviewers wrote that the play's \"basic insight... is at heart, most fairy tales are about the loving yet embattled relationship between parents and children. Almost everything that goes wrong—which is to say, almost everything that can—arises from a failure of parental or filial duty, despite the best intentions.\" Stephen Holden wrote that the show's themes include parent-child relationships and the individual's responsibility to the community. The witch isn't just a scowling old hag, but a key symbol of moral ambivalence. Lapine said that the most unpleasant person (the Witch) would have the truest things to say and the \"nicer\" people would be less honest. In the Witch's words: \"I'm not good; I'm not nice; I'm just right.\"", "title": "Analysis of book and music" }, { "paragraph_id": 82, "text": "Given the show's debut during the 1980s, the height of the U.S. AIDS crisis, the work has been interpreted as a parable about AIDS. 
In this interpretation, the Giant's Wife is a metaphor for HIV/AIDS, killing good and bad characters indiscriminately and forcing survivors to band together to stop the threat and move on from the devastation, reflecting the toll AIDS took on many communities. When asked about the connection, Sondheim acknowledged that initial audiences interpreted it as an AIDS metaphor, but said that the work was not intended to be specific.", "title": "Analysis of book and music" }, { "paragraph_id": 83, "text": "The score is also notable in Sondheim's output because of its intricate reworking and development of small musical motifs. In particular, the opening words, \"I wish\", are set to the interval of a rising major second and this small unit is both repeated and developed throughout the show, just as Lapine's book explores the consequences of self-interest and \"wishing\". The dialogue is characterized by the heavy use of syncopated speech. In many instances, the characters' lines are delivered with a fixed beat that follows natural speech rhythms, but is also purposely composed in eighth, sixteenth, and quarter note rhythms as part of a spoken song. Like many Sondheim/Lapine productions, the songs contain thought-process narrative, where characters converse or think aloud.", "title": "Analysis of book and music" }, { "paragraph_id": 84, "text": "Sondheim drew on parts of his troubled childhood when writing the show. In 1987, he told Time Magazine that the \"father uncomfortable with babies [was] his father, and [the] mother who regrets having had children [was] his mother.\"", "title": "Analysis of book and music" } ]
Into the Woods is a 1987 musical with music and lyrics by Stephen Sondheim and book by James Lapine. The musical intertwines the plots of several Brothers Grimm fairy tales, exploring the consequences of the characters' wishes and quests. The main characters are taken from "Little Red Riding Hood", "Jack and the Beanstalk", "Rapunzel", "Cinderella", and several others. The musical is tied together by a story involving a childless baker and his wife and their quest to begin a family, their interaction with a witch who has placed a curse on them, and their interaction with other storybook characters during their journey. The second collaboration between Sondheim and Lapine after Sunday in the Park with George (1984), Into the Woods debuted in San Diego at the Old Globe Theatre in 1986 and premiered on Broadway on November 5, 1987, where it won three major Tony Awards, in a year dominated by The Phantom of the Opera. The musical has since been produced many times, with a 1988 U.S. national tour, a 1990 West End production, a 1997 10th-anniversary concert, a 2002 Broadway revival, a 2010 London revival, and a 2012 outdoor Shakespeare in the Park production in New York City. A Disney film adaptation, directed by Rob Marshall, was released in 2014. The film grossed over $213 million worldwide, and received three nominations at both the Academy Awards and the Golden Globe Awards. A second Broadway revival began performances on June 28, 2022, at the St. James Theatre, and opened on July 10. The production closed on January 8, 2023, and began touring the U.S. on February 18 of the same year.
2002-02-25T15:51:15Z
2024-01-01T01:06:28Z
[ "Template:Cinderella", "Template:Short description", "Template:Col-begin", "Template:Main", "Template:Nominated", "Template:Cite news", "Template:Citation", "Template:Nom", "Template:Won", "Template:Navboxes", "Template:Rapunzel", "Template:Stephen Sondheim", "Template:Dead link", "Template:Wikiquote", "Template:About", "Template:Infobox musical", "Template:Col-2", "Template:Cite web", "Template:Webarchive", "Template:ISBN", "Template:Ibdb show", "Template:Authority control", "Template:Col-end", "Template:Reflist", "Template:Cite magazine", "Template:Jack" ]
https://en.wikipedia.org/wiki/Into_the_Woods
15,342
Isaac Klein
Isaac Klein (September 5, 1905 – January 23, 1979) was a prominent rabbi and halakhic authority within Conservative Judaism. Klein was born in the small village of Várpalánka, today part of Mukachevo, in what was then Hungary. He emigrated with his family to the United States in 1921. He earned a BA from City College of New York in 1931. Although nearing ordination at the Yeshiva University's Rabbi Isaac Elchanan Theological Seminary, he transferred to the Jewish Theological Seminary of America (JTSA), where he was ordained in 1934 and received the advanced Jewish legal degree of Hattarat Hora’ah under the great talmudic scholar Rabbi Professor Louis Ginzberg. He was one of only three people, along with Boaz Cohen and Louis Finkelstein, ever to receive this degree from JTSA. Klein subsequently earned a PhD from Harvard under the pioneering academic of Judaic studies Harry Wolfson. He married the former Henriette Levine in 1932 and had three daughters, Hannah, Miriam, and Rivke. Devoted to his family, he dedicated his major work, A Guide to Jewish Religious Practice to his children, sons-in-law and 13 grandchildren listing each by name. Klein served as rabbi at Kadimoh Congregation in Springfield, Massachusetts from 1934 to 1953; Temple Emanu-El, Buffalo, New York, 1953–1968; Temple Shaarey Zedek, Buffalo, (which was created from the merger of Emanu-El with Temple Beth David in 1968), 1968–1972. A beloved Rabbi, he influenced generations of congregants and visiting students and, together with his wife who was an educator, founded Jewish day schools in both Springfield and Buffalo. Despite the difficulties facing a congregational Rabbi raising a family, Klein volunteered for the U.S. Army during World War II as a chaplain, motivated by a cause he saw as clearly right with important implications for the Jewish People. He served over 4 years, rising to the rank of Major and was an advisor to the high commissioner of the Occupation government. 
He also served on special assignments for Jewish soldiers in the U.S. Army in the 1950s, receiving the simulated rank of Brigadier General for these missions. His experiences in the war are described in his book The Anguish and the Ecstasy of a Jewish Chaplain. Klein was a leader of the right-wing of the Conservative movement. He was president of the Rabbinical Assembly, 1958–1960, and a member of its Committee on Jewish Law and Standards, 1948–1979. He was the author of several books, notably, A Guide to Jewish Religious Practice. One of the outstanding halakhists of the movement, he served as a leading member of the Committee on Jewish Law and Standards from 1948 until his death in 1979. As a leading authority on halakha he authored many important teshuvot (responsa), many of which were published in his influential "Responsa and Halakhic Studies". From the 1950s to 1970s, he wrote a comprehensive guide to Jewish law that was used to teach halakha at the JTSA. In 1979 he assembled this into A Guide to Jewish Religious Practice, which is used widely by laypeople and rabbis within Conservative Judaism. The philosophy upon which A Guide to Jewish Religious Practice is written is stated in the foreword: "The premise on which Torah is based is that all aspects of life - leisure no less than business, worship or rites of passage (birth, bar mitzvah, marriage, divorce, death) - are part of the covenant and mandate under which every Jew is to serve God in everything he does. In the eyes of Torah there is, strictly speaking, no such thing as the purely private domain, for even in solitude - be it the privacy of the bath or the unconsciousness of sleep - one has the capacity and the duty to serve God." This message, of life seen in consonance with the dictates of Judaism, permeates many pages of the book. 
Rabbi Louis Finkelstein, scholar of the JTSA, wrote: "There are those who would think that we have but two alternatives, to reject or to accept the law, but in either case to treat it as a dead letter. Both of these alternatives are repugnant to the whole tradition of Judaism. Jewish law must be preserved but it is subject to interpretation by those who have mastered it, and the interpretation placed upon it by duly authorized masters in every generation must be accepted with as much reverence as those which were given in previous generations." This understanding of traditional preservation of the law through its continuous interpretation lies at the heart of Klein's extensive study of Jewish law. Klein's papers are located at the University Archives, State University of New York at Buffalo (see finding aid). The archives include fourteen reels of microfilm. The collection consists of extensive writings by Klein on traditional Jewish practice and law. This includes manuscript material for his books Guide to Jewish Religious Practice (1979), The Ten Commandments in a Changing World (1963), The Anguish and the Ecstasy of a Jewish Chaplain (1974), and his translation of The Code of Maimonides (Mishneh Torah): Book 7, The Book of Agriculture (1979). The collection also contains speeches, sermons, articles, and remarks from the Conservative Jewish viewpoint on subjects such as Jewish medical ethics, dietary laws, adoption, and marriage and divorce. Meeting minutes, annual reports, bulletins, and sermons relating to Klein's rabbinical vocations in Springfield, Massachusetts and Buffalo, New York are also included. The papers contain photographs, wartime letters, and military records of Klein documenting his service in World War II as a director of Jewish religious affairs in Germany.
[ { "paragraph_id": 0, "text": "Isaac Klein (September 5, 1905 – January 23, 1979) was a prominent rabbi and halakhic authority within Conservative Judaism.", "title": "" }, { "paragraph_id": 1, "text": "Klein was born in the small village of Várpalánka, today part of Mukachevo, in what was then Hungary. He emigrated with his family to the United States in 1921. He earned a BA from City College of New York in 1931. Although nearing ordination at the Yeshiva University's Rabbi Isaac Elchanan Theological Seminary, he transferred to the Jewish Theological Seminary of America (JTSA), where he was ordained in 1934 and received the advanced Jewish legal degree of Hattarat Hora’ah under the great talmudic scholar Rabbi Professor Louis Ginzberg. He was one of only three people, along with Boaz Cohen and Louis Finkelstein, ever to receive this degree from JTSA. Klein subsequently earned a PhD from Harvard under the pioneering academic of Judaic studies Harry Wolfson.", "title": "Personal life, education, and career" }, { "paragraph_id": 2, "text": "He married the former Henriette Levine in 1932 and had three daughters, Hannah, Miriam, and Rivke. Devoted to his family, he dedicated his major work, A Guide to Jewish Religious Practice to his children, sons-in-law and 13 grandchildren listing each by name.", "title": "Personal life, education, and career" }, { "paragraph_id": 3, "text": "Klein served as rabbi at Kadimoh Congregation in Springfield, Massachusetts from 1934 to 1953; Temple Emanu-El, Buffalo, New York, 1953–1968; Temple Shaarey Zedek, Buffalo, (which was created from the merger of Emanu-El with Temple Beth David in 1968), 1968–1972. 
A beloved Rabbi, he influenced generations of congregants and visiting students and, together with his wife who was an educator, founded Jewish day schools in both Springfield and Buffalo.", "title": "Personal life, education, and career" }, { "paragraph_id": 4, "text": "Despite the difficulties facing a congregational Rabbi raising a family, Klein volunteered for the U.S. Army during World War II as a chaplain, motivated by a cause he saw as clearly right with important implications for the Jewish People. He served over 4 years, rising to the rank of Major and was an advisor to the high commissioner of the Occupation government. He also served on special assignments for Jewish soldiers in the U.S. Army in the 1950s, receiving the simulated rank of Brigadier General for these missions. His experiences in the war are described in his book The Anguish and the Ecstasy of a Jewish Chaplain.", "title": "Personal life, education, and career" }, { "paragraph_id": 5, "text": "Klein was a leader of the right-wing of the Conservative movement. He was president of the Rabbinical Assembly, 1958–1960, and a member of its Committee on Jewish Law and Standards, 1948–1979. He was the author of several books, notably, A Guide to Jewish Religious Practice. One of the outstanding halakhists of the movement, he served as a leading member of the Committee on Jewish Law and Standards from 1948 until his death in 1979.", "title": "Role within Conservative Judaism" }, { "paragraph_id": 6, "text": "As a leading authority on halakha he authored many important teshuvot (responsa), many of which were published in his influential \"Responsa and Halakhic Studies\". From the 1950s to 1970s, he wrote a comprehensive guide to Jewish law that was used to teach halakha at the JTSA. 
In 1979 he assembled this into A Guide to Jewish Religious Practice, which is used widely by laypeople and rabbis within Conservative Judaism.", "title": "Role within Conservative Judaism" }, { "paragraph_id": 7, "text": "The philosophy upon which A Guide to Jewish Religious Practice is written is stated in the foreword: \"The premise on which Torah is based is that all aspects of life - leisure no less than business, worship or rites of passage (birth, bar mitzvah, marriage, divorce, death) - are part of the covenant and mandate under which every Jew is to serve God in everything he does. In the eyes of Torah there is, strictly speaking, no such thing as the purely private domain, for even in solitude - be it the privacy of the bath or the unconsciousness of sleep - one has the capacity and the duty to serve God.\" This message, of life seen in consonance with the dictates of Judaism, permeates many pages of the book. Rabbi Louis Finkelstein, scholar of the JTSA, wrote: \"There are those who would think that we have but two alternatives, to reject or to accept the law, but in either case to treat it as a dead letter. Both of these alternatives are repugnant to the whole tradition of Judaism. Jewish law must be preserved but it is subject to interpretation by those who have mastered it, and the interpretation placed upon it by duly authorized masters in every generation must be accepted with as much reverence as those which were given in previous generations.\"", "title": "Rabbinic thought" }, { "paragraph_id": 8, "text": "This understanding of traditional preservation of the law through its continuous interpretation lies at the heart of Klein's extensive study of Jewish law.", "title": "Rabbinic thought" }, { "paragraph_id": 9, "text": "Klein's papers are located at the University Archives, State University of New York at Buffalo (see finding aid). The archives include fourteen reels of microfilm. 
The collection consists of extensive writings by Klein on traditional Jewish practice and law. This includes manuscript material for his books Guide to Jewish Religious Practice (1979), The Ten Commandments in a Changing World (1963), The Anguish and the Ecstasy of a Jewish Chaplain (1974), and his translation of The Code of Maimonides (Mishneh Torah): Book 7, The Book of Agriculture (1979). The collection also contains speeches, sermons, articles, and remarks from the Conservative Jewish viewpoint on subjects such as Jewish medical ethics, dietary laws, adoption, and marriage and divorce. Meeting minutes, annual reports, bulletins, and sermons relating to Klein's rabbinical vocations in Springfield, Massachusetts and Buffalo, New York are also included. The papers contain photographs, wartime letters, and military records of Klein documenting his service in World War II as a director of Jewish religious affairs in Germany.", "title": "Rabbinic thought" } ]
Isaac Klein was a prominent rabbi and halakhic authority within Conservative Judaism.
2023-02-07T20:14:36Z
[ "Template:Citation needed", "Template:Webarchive", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Isaac_Klein
15,343
Intron
An intron is any nucleotide sequence within a gene that is not expressed or operative in the final RNA product. The word intron is derived from the term intragenic region, i.e., a region inside a gene. The term intron refers to both the DNA sequence within a gene and the corresponding RNA sequence in RNA transcripts. The non-intron sequences that become joined by this RNA processing to form the mature RNA are called exons. Introns are found in the genes of most organisms and many viruses and they can be located in protein-coding genes, genes that function as RNA (noncoding genes), and some pseudogenes. There are four main types of introns: tRNA introns, group I introns, group II introns, and spliceosomal introns (see below). Introns are rare in Bacteria and Archaea (prokaryotes), but most eukaryotic genes contain multiple spliceosomal introns. Introns were first discovered in protein-coding genes of adenovirus, and were subsequently identified in genes encoding transfer RNA and ribosomal RNA genes. Introns are now known to occur within a wide variety of genes throughout organisms, bacteria, and viruses within all of the biological kingdoms. The fact that genes were split or interrupted by introns was discovered independently in 1977 by Phillip Allen Sharp and Richard J. Roberts, for which they shared the Nobel Prize in Physiology or Medicine in 1993. The term intron was introduced by American biochemist Walter Gilbert: "The notion of the cistron [i.e., gene] ... must be replaced by that of a transcription unit containing regions which will be lost from the mature messenger – which I suggest we call introns (for intragenic regions) – alternating with regions which will be expressed – exons." (Gilbert 1978) The term intron also refers to intracistron, i.e., an additional piece of DNA that arises within a cistron. 
Although introns are sometimes called intervening sequences, the term "intervening sequence" can refer to any of several families of internal nucleic acid sequences that are not present in the final gene product, including inteins, untranslated regions (UTR), and nucleotides removed by RNA editing, in addition to introns. The frequency of introns within different genomes is observed to vary widely across the spectrum of biological organisms. For example, introns are extremely common within the nuclear genome of jawed vertebrates (e.g. humans, mice, and pufferfish (fugu)), where protein-coding genes almost always contain multiple introns, while introns are rare within the nuclear genes of some eukaryotic microorganisms, for example baker's/brewer's yeast (Saccharomyces cerevisiae). In contrast, the mitochondrial genomes of vertebrates are entirely devoid of introns, while those of eukaryotic microorganisms may contain many introns. A particularly extreme case is the Drosophila dhc7 gene containing a ≥3.6 megabase (Mb) intron, which takes roughly three days to transcribe. On the other extreme, a 2015 study suggests that the shortest known metazoan intron length is 30 base pairs (bp) belonging to the human MST1L gene. The shortest known introns belong to the heterotrich ciliates, such as Stentor coeruleus, in which most (> 95%) introns are 15 or 16 bp long. Splicing of all intron-containing RNA molecules is superficially similar, as described above. However, different types of introns were identified through the examination of intron structure by DNA sequence analysis, together with genetic and biochemical analysis of RNA splicing reactions. At least four distinct classes of introns have been identified: Group III introns are proposed to be a fifth family, but little is known about the biochemical apparatus that mediates their splicing. They appear to be related to group II introns, and possibly to spliceosomal introns. 
Nuclear pre-mRNA introns (spliceosomal introns) are characterized by specific intron sequences located at the boundaries between introns and exons. These sequences are recognized by spliceosomal RNA molecules when the splicing reactions are initiated. In addition, they contain a branch point, a particular nucleotide sequence near the 3' end of the intron that becomes covalently linked to the 5' end of the intron during the splicing process, generating a branched (lariat) intron. Apart from these three short conserved elements, nuclear pre-mRNA intron sequences are highly variable. Nuclear pre-mRNA introns are often much longer than their surrounding exons. Transfer RNA introns that depend upon proteins for removal occur at a specific location within the anticodon loop of unspliced tRNA precursors, and are removed by a tRNA splicing endonuclease. The exons are then linked together by a second protein, the tRNA splicing ligase. Note that self-splicing introns are also sometimes found within tRNA genes. Group I and group II introns are found in genes encoding proteins (messenger RNA), transfer RNA and ribosomal RNA in a very wide range of living organisms. Following transcription into RNA, group I and group II introns also make extensive internal interactions that allow them to fold into a specific, complex three-dimensional architecture. These complex architectures allow some group I and group II introns to be self-splicing, that is, the intron-containing RNA molecule can rearrange its own covalent structure so as to precisely remove the intron and link the exons together in the correct order. In some cases, particular intron-binding proteins are involved in splicing, acting in such a way that they assist the intron in folding into the three-dimensional structure that is necessary for self-splicing activity. 
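The boundary recognition described above follows, for nearly all spliceosomal introns, the canonical "GT–AG rule" at the DNA level (GU–AG in the RNA): the intron begins with GT at the 5' donor site and ends with AG at the 3' acceptor site. As a minimal illustration of excision and exon joining (a toy sketch, not any real splicing tool; the sequence and function name are invented):

```python
def splice(pre_mrna, introns):
    """Excise the given (start, end) intron spans (0-based, end-exclusive)
    and join the remaining exons into the mature transcript."""
    exons, prev_end = [], 0
    for start, end in sorted(introns):
        # Canonical GT...AG boundaries (DNA-level) for spliceosomal introns.
        assert pre_mrna[start:start + 2] == "GT" and pre_mrna[end - 2:end] == "AG"
        exons.append(pre_mrna[prev_end:start])
        prev_end = end
    exons.append(pre_mrna[prev_end:])
    return "".join(exons)

# Toy pre-mRNA: exon 1 + intron (GT ... interior ... AG) + exon 2
pre = "ATGGCC" + "GTAAGT" + "TTTTCTGAC" + "TTCAG" + "GGATAA"
mature = splice(pre, [(6, 26)])   # -> "ATGGCCGGATAA"
```

Real splice-site recognition depends on the branch point and on degenerate consensus motifs, not just the terminal dinucleotides, which is why cryptic splice sites exist at all.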
Group I and group II introns are distinguished by different sets of internal conserved sequences and folded structures, and by the fact that splicing of RNA molecules containing group II introns generates branched introns (like those of spliceosomal RNAs), while group I introns use a non-encoded guanosine nucleotide (typically GTP) to initiate splicing, adding it on to the 5'-end of the excised intron. The spliceosome is a very complex structure containing up to one hundred proteins and five different RNAs. The substrate of the reaction is a long RNA molecule and the transesterification reactions catalyzed by the spliceosome require the bringing together of sites that may be thousands of nucleotides apart. All biochemical reactions are associated with known error rates and the more complicated the reaction the higher the error rate. Therefore, it is not surprising that the splicing reaction catalyzed by the spliceosome has a significant error rate even though there are spliceosome accessory factors that suppress the accidental cleavage of cryptic splice sites. Under ideal circumstances, the splicing reaction is likely to be 99.999% accurate (error rate of 10⁻⁵) and the correct exons will be joined and the correct intron will be deleted. However, these ideal conditions require very close matches to the best splice site sequences and the absence of any competing cryptic splice site sequences within the introns and those conditions are rarely met in large eukaryotic genes that may cover more than 40 kilobase pairs. Recent studies have shown that the actual error rate can be considerably higher than 10⁻⁵ and may be as high as 2% or 3% errors (error rate of 2 or 3 × 10⁻²) per gene. Additional studies suggest that the error rate is no less than 0.1% per intron. This relatively high level of splicing errors explains why most splice variants are rapidly degraded by nonsense-mediated decay. 
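The per-intron figures compound across a transcript: if each intron independently fails to splice correctly with probability p, a transcript with n introns carries at least one error with probability 1 − (1 − p)ⁿ. A quick sketch using the 0.1%-per-intron lower bound quoted above and the human average of roughly 8 introns per gene (the independence assumption is mine, for illustration only):

```python
def fraction_missplit(p_per_intron, n_introns):
    """Fraction of transcripts carrying at least one splicing error,
    assuming errors at each intron occur independently."""
    return 1.0 - (1.0 - p_per_intron) ** n_introns

# 0.1% error per intron, ~8 introns per human gene:
f = fraction_missplit(0.001, 8)   # ~0.008, i.e. roughly 0.8% of transcripts
```

Even this lower-bound estimate implies that a measurable fraction of every multi-exon gene's output is aberrant, consistent with the role of nonsense-mediated decay described above.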
One study put it this way: "The presence of sloppy binding sites within genes causes splicing errors and it may seem strange that these sites haven't been eliminated by natural selection. The argument for their persistence is similar to the argument for junk DNA. Although mutations which create or disrupt binding sites may be slightly deleterious, the large number of possible such mutations makes it inevitable that some will reach fixation in a population. This is particularly relevant in species, such as humans, with relatively small long-term effective population sizes. It is plausible, then, that the human genome carries a substantial load of suboptimal sequences which cause the generation of aberrant transcript isoforms. In this study, we present direct evidence that this is indeed the case." While the catalytic reaction may be accurate enough for effective processing most of the time, the overall error rate may be partly limited by the fidelity of transcription because transcription errors will introduce mutations that create cryptic splice sites. In addition, the transcription error rate of 10⁻⁵ – 10⁻⁶ is high enough that one in every 25,000 transcribed exons will have an incorporation error in one of the splice sites leading to a skipped intron or a skipped exon. Almost all multi-exon genes will produce incorrectly spliced transcripts but the frequency of this background noise will depend on the size of the genes, the number of introns, and the quality of the splice site sequences. In some cases, splice variants will be produced by mutations in the gene (DNA). These can be SNP polymorphisms that create a cryptic splice site or mutate a functional site. They can also be somatic cell mutations that affect splicing in a particular tissue or a cell line. When the mutant allele is in a heterozygous state this will result in production of two abundant splice variants; one functional and one non-functional. 
In the homozygous state the mutant alleles may cause a genetic disease such as the hemophilia found in descendants of Queen Victoria where a mutation in one of the introns in a blood clotting factor gene creates a cryptic 3' splice site resulting in aberrant splicing. A significant fraction of human deaths by disease may be caused by mutations that interfere with normal splicing; mostly by creating cryptic splice sites. Incorrectly spliced transcripts can easily be detected and their sequences entered into the online databases. They are usually described as "alternatively spliced" transcripts, which can be confusing because the term does not distinguish between real, biologically relevant, alternative splicing and processing noise due to splicing errors. One of the central issues in the field of alternative splicing is working out the differences between these two possibilities. Many scientists have argued that the null hypothesis should be splicing noise, putting the burden of proof on those who claim biologically relevant alternative splicing. According to those scientists, the claim of function must be accompanied by convincing evidence that multiple functional products are produced from the same gene. While introns do not encode protein products, they are integral to gene expression regulation. Some introns themselves encode functional RNAs through further processing after splicing to generate noncoding RNA molecules. Alternative splicing is widely used to generate multiple proteins from a single gene. Furthermore, some introns play essential roles in a wide range of gene expression regulatory functions such as nonsense-mediated decay and mRNA export. 
After the initial discovery of introns in protein-coding genes of the eukaryotic nucleus, there was significant debate as to whether introns in modern-day organisms were inherited from a common ancient ancestor (termed the introns-early hypothesis), or whether they appeared in genes rather recently in the evolutionary process (termed the introns-late hypothesis). Another theory is that the spliceosome and the intron-exon structure of genes is a relic of the RNA world (the introns-first hypothesis). There is still considerable debate about which of these hypotheses is most correct, but the popular consensus at the moment is that following the formation of the first eukaryotic cell, group II introns from the bacterial endosymbiont invaded the host genome. In the beginning these self-splicing introns excised themselves from the mRNA precursor but over time some of them lost that ability and their excision had to be aided in trans by other group II introns. Eventually a number of specific trans-acting introns evolved and these became the precursors to the snRNAs of the spliceosome. The efficiency of splicing was improved by association with stabilizing proteins to form the primitive spliceosome. Early studies of genomic DNA sequences from a wide range of organisms show that the intron-exon structure of homologous genes in different organisms can vary widely. More recent studies of entire eukaryotic genomes have now shown that the lengths and density (introns/gene) of introns varies considerably between related species. For example, while the human genome contains an average of 8.4 introns/gene (139,418 in the genome), the unicellular fungus Encephalitozoon cuniculi contains only 0.0075 introns/gene (15 introns in the genome). Since eukaryotes arose from a common ancestor (common descent), there must have been extensive gain or loss of introns during evolutionary time. 
This process is thought to be subject to selection, with a tendency towards intron gain in larger species due to their smaller population sizes, and the converse in smaller (particularly unicellular) species. Biological factors also influence which genes in a genome lose or accumulate introns. Alternative splicing of exons within a gene after intron excision acts to introduce greater variability of protein sequences translated from a single gene, allowing multiple related proteins to be generated from a single gene and a single precursor mRNA transcript. The control of alternative RNA splicing is performed by a complex network of signaling molecules that respond to a wide range of intracellular and extracellular signals. Introns contain several short sequences that are important for efficient splicing, such as acceptor and donor sites at either end of the intron as well as a branch point site, which are required for proper splicing by the spliceosome. Some introns are known to enhance the expression of the gene that they are contained in by a process known as intron-mediated enhancement (IME). Actively transcribed regions of DNA frequently form R-loops that are vulnerable to DNA damage. In highly expressed yeast genes, introns inhibit R-loop formation and the occurrence of DNA damage. Genome-wide analysis in both yeast and humans revealed that intron-containing genes have decreased R-loop levels and decreased DNA damage compared to intronless genes of similar expression. Insertion of an intron within an R-loop prone gene can also suppress R-loop formation and recombination. Bonnet et al. (2017) speculated that the function of introns in maintaining genetic stability may explain their evolutionary maintenance at certain locations, particularly in highly expressed genes. The physical presence of introns promotes cellular resistance to starvation via intron enhanced repression of ribosomal protein genes of nutrient-sensing pathways. 
Introns may be lost or gained over evolutionary time, as shown by many comparative studies of orthologous genes. Subsequent analyses have identified thousands of examples of intron loss and gain events, and it has been proposed that the emergence of eukaryotes, or the initial stages of eukaryotic evolution, involved an intron invasion. Two definitive mechanisms of intron loss, reverse transcriptase-mediated intron loss (RTMIL) and genomic deletions, have been identified, and are known to occur. The definitive mechanisms of intron gain, however, remain elusive and controversial. At least seven mechanisms of intron gain have been reported thus far: intron transposition, transposon insertion, tandem genomic duplication, intron transfer, intron gain during double-strand break repair (DSBR), insertion of a group II intron, and intronization. In theory it should be easiest to deduce the origin of recently gained introns due to the lack of host-induced mutations, yet even introns gained recently did not arise from any of the aforementioned mechanisms. These findings thus raise the question of whether or not the proposed mechanisms of intron gain fail to describe the mechanistic origin of many novel introns because they are not accurate mechanisms of intron gain, or if there are other, yet to be discovered, processes generating novel introns. In intron transposition, the most commonly purported intron gain mechanism, a spliced intron is thought to reverse splice into either its own mRNA or another mRNA at a previously intron-less position. This intron-containing mRNA is then reverse transcribed and the resulting intron-containing cDNA may then cause intron gain via complete or partial recombination with its original genomic locus. Transposon insertions can also result in intron creation. 
Such an insertion could intronize the transposon without disrupting the coding sequence when a transposon inserts into the sequence AGGT, resulting in the duplication of this sequence on each side of the transposon. It is not yet understood why these elements are spliced, whether by chance, or by some preferential action by the transposon. In tandem genomic duplication, due to the similarity between consensus donor and acceptor splice sites, which both closely resemble AGGT, the tandem genomic duplication of an exonic segment harboring an AGGT sequence generates two potential splice sites. When recognized by the spliceosome, the sequence between the original and duplicated AGGT will be spliced, resulting in the creation of an intron without alteration of the coding sequence of the gene. Double-stranded break repair via non-homologous end joining was recently identified as a source of intron gain when researchers identified short direct repeats flanking 43% of gained introns in Daphnia. These numbers must be compared to the number of conserved introns flanked by repeats in other organisms, though, for statistical relevance. For group II intron insertion, the retrohoming of a group II intron into a nuclear gene was proposed to cause recent spliceosomal intron gain. Intron transfer has been hypothesized to result in intron gain when a paralog or pseudogene gains an intron and then transfers this intron via recombination to an intron-absent location in its sister paralog. Intronization is the process by which mutations create novel introns from formerly exonic sequence. Thus, unlike other proposed mechanisms of intron gain, this mechanism does not require the insertion or generation of DNA to create a novel intron. The only hypothesized mechanism of recent intron gain lacking any direct evidence is that of group II intron insertion, which when demonstrated in vivo, abolishes gene expression. 
Group II introns are therefore the presumed ancestors of spliceosomal introns, acting as site-specific retroelements, and are no longer responsible for intron gain. Tandem genomic duplication is the only proposed mechanism with supporting in vivo experimental evidence: a short intragenic tandem duplication can insert a novel intron into a protein-coding gene, leaving the corresponding peptide sequence unchanged. This mechanism also has extensive indirect evidence lending support to the idea that tandem genomic duplication is a prevalent mechanism for intron gain. The testing of other proposed mechanisms in vivo, particularly intron gain during DSBR, intron transfer, and intronization, is possible, although these mechanisms must be demonstrated in vivo to solidify them as actual mechanisms of intron gain. Further genomic analyses, especially when executed at the population level, may then quantify the relative contribution of each mechanism, possibly identifying species-specific biases that may shed light on varied rates of intron gain amongst different species.
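The tandem-duplication mechanism described above is easy to demonstrate on a toy sequence: duplicating an exonic segment that contains AGGT yields two copies of the motif, and splicing from the GT of the first copy to the AG of the second removes exactly the duplicated material, restoring the original coding sequence. A hypothetical sketch (sequence and function names are invented; real splice-site choice is governed by the full consensus motifs, not AGGT alone):

```python
def tandem_duplicate(seq, start, end):
    """Return seq with the segment seq[start:end] duplicated in tandem."""
    return seq[:end] + seq[start:end] + seq[end:]

def splice_between_aggt(seq):
    """Splice using the first AGGT as the exon1|intron boundary (the exon
    ends ...AG, the intron starts GT) and the second AGGT as the
    intron|exon2 boundary (the intron ends ...AG, exon 2 starts GT)."""
    first = seq.find("AGGT")
    second = seq.find("AGGT", first + 1)
    return seq[:first + 2] + seq[second + 2:]

gene = "ATGCCAGGTTACGA"               # invented exonic sequence with one AGGT
dup = tandem_duplicate(gene, 3, 11)   # duplicates the segment "CCAGGTTA"
restored = splice_between_aggt(dup)   # equals gene: coding sequence unchanged
```

Because the excised "intron" is exactly one copy of the duplicated segment, the mature transcript is identical to the pre-duplication gene, which is why this mechanism can create an intron without altering the encoded peptide.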
[ { "paragraph_id": 0, "text": "An intron is any nucleotide sequence within a gene that is not expressed or operative in the final RNA product. The word intron is derived from the term intragenic region, i.e., a region inside a gene. The term intron refers to both the DNA sequence within a gene and the corresponding RNA sequence in RNA transcripts. The non-intron sequences that become joined by this RNA processing to form the mature RNA are called exons.", "title": "" }, { "paragraph_id": 1, "text": "Introns are found in the genes of most organisms and many viruses and they can be located in protein-coding genes, genes that function as RNA (noncoding genes), and some pseudogenes. There are four main types of introns: tRNA introns, group I introns, group II introns, and spliceosomal introns (see below). Introns are rare in Bacteria and Archaea (prokaryotes), but most eukaryotic genes contain multiple spliceosomal introns.", "title": "" }, { "paragraph_id": 2, "text": "Introns were first discovered in protein-coding genes of adenovirus, and were subsequently identified in genes encoding transfer RNA and ribosomal RNA genes. Introns are now known to occur within a wide variety of genes throughout organisms, bacteria, and viruses within all of the biological kingdoms.", "title": "Discovery and etymology" }, { "paragraph_id": 3, "text": "The fact that genes were split or interrupted by introns was discovered independently in 1977 by Phillip Allen Sharp and Richard J. Roberts, for which they shared the Nobel Prize in Physiology or Medicine in 1993. The term intron was introduced by American biochemist Walter Gilbert:", "title": "Discovery and etymology" }, { "paragraph_id": 4, "text": "\"The notion of the cistron [i.e., gene] ... 
must be replaced by that of a transcription unit containing regions which will be lost from the mature messenger – which I suggest we call introns (for intragenic regions) – alternating with regions which will be expressed – exons.\" (Gilbert 1978)", "title": "Discovery and etymology" }, { "paragraph_id": 5, "text": "The term intron also refers to intracistron, i.e., an additional piece of DNA that arises within a cistron.", "title": "Discovery and etymology" }, { "paragraph_id": 6, "text": "Although introns are sometimes called intervening sequences, the term \"intervening sequence\" can refer to any of several families of internal nucleic acid sequences that are not present in the final gene product, including inteins, untranslated regions (UTR), and nucleotides removed by RNA editing, in addition to introns.", "title": "Discovery and etymology" }, { "paragraph_id": 7, "text": "The frequency of introns within different genomes is observed to vary widely across the spectrum of biological organisms. For example, introns are extremely common within the nuclear genome of jawed vertebrates (e.g. humans, mice, and pufferfish (fugu)), where protein-coding genes almost always contain multiple introns, while introns are rare within the nuclear genes of some eukaryotic microorganisms, for example baker's/brewer's yeast (Saccharomyces cerevisiae). In contrast, the mitochondrial genomes of vertebrates are entirely devoid of introns, while those of eukaryotic microorganisms may contain many introns.", "title": "Distribution" }, { "paragraph_id": 8, "text": "A particularly extreme case is the Drosophila dhc7 gene containing a ≥3.6 megabase (Mb) intron, which takes roughly three days to transcribe. On the other extreme, a 2015 study suggests that the shortest known metazoan intron length is 30 base pairs (bp) belonging to the human MST1L gene. 
The shortest known introns belong to the heterotrich ciliates, such as Stentor coeruleus, in which most (> 95%) introns are 15 or 16 bp long.", "title": "Distribution" }, { "paragraph_id": 9, "text": "Splicing of all intron-containing RNA molecules is superficially similar, as described above. However, different types of introns were identified through the examination of intron structure by DNA sequence analysis, together with genetic and biochemical analysis of RNA splicing reactions. At least four distinct classes of introns have been identified:", "title": "Classification" }, { "paragraph_id": 10, "text": "Group III introns are proposed to be a fifth family, but little is known about the biochemical apparatus that mediates their splicing. They appear to be related to group II introns, and possibly to spliceosomal introns.", "title": "Classification" }, { "paragraph_id": 11, "text": "Nuclear pre-mRNA introns (spliceosomal introns) are characterized by specific intron sequences located at the boundaries between introns and exons. These sequences are recognized by spliceosomal RNA molecules when the splicing reactions are initiated. In addition, they contain a branch point, a particular nucleotide sequence near the 3' end of the intron that becomes covalently linked to the 5' end of the intron during the splicing process, generating a branched (lariat) intron. Apart from these three short conserved elements, nuclear pre-mRNA intron sequences are highly variable. Nuclear pre-mRNA introns are often much longer than their surrounding exons.", "title": "Classification" }, { "paragraph_id": 12, "text": "Transfer RNA introns that depend upon proteins for removal occur at a specific location within the anticodon loop of unspliced tRNA precursors, and are removed by a tRNA splicing endonuclease. The exons are then linked together by a second protein, the tRNA splicing ligase. 
Note that self-splicing introns are also sometimes found within tRNA genes.", "title": "Classification" }, { "paragraph_id": 13, "text": "Group I and group II introns are found in genes encoding proteins (messenger RNA), transfer RNA and ribosomal RNA in a very wide range of living organisms. Following transcription into RNA, group I and group II introns also make extensive internal interactions that allow them to fold into a specific, complex three-dimensional architecture. These complex architectures allow some group I and group II introns to be self-splicing, that is, the intron-containing RNA molecule can rearrange its own covalent structure so as to precisely remove the intron and link the exons together in the correct order. In some cases, particular intron-binding proteins are involved in splicing, acting in such a way that they assist the intron in folding into the three-dimensional structure that is necessary for self-splicing activity. Group I and group II introns are distinguished by different sets of internal conserved sequences and folded structures, and by the fact that splicing of RNA molecules containing group II introns generates branched introns (like those of spliceosomal RNAs), while group I introns use a non-encoded guanosine nucleotide (typically GTP) to initiate splicing, adding it on to the 5'-end of the excised intron.", "title": "Classification" }, { "paragraph_id": 14, "text": "The spliceosome is a very complex structure containing up to one hundred proteins and five different RNAs. The substrate of the reaction is a long RNA molecule and the transesterification reactions catalyzed by the spliceosome require the bringing together of sites that may be thousands of nucleotides apart. All biochemical reactions are associated with known error rates and the more complicated the reaction the higher the error rate. 
Therefore, it is not surprising that the splicing reaction catalyzed by the spliceosome has a significant error rate even though there are spliceosome accessory factors that suppress the accidental cleavage of cryptic splice sites.", "title": "On the accuracy of splicing" }, { "paragraph_id": 15, "text": "Under ideal circumstances, the splicing reaction is likely to be 99.999% accurate (error rate of 10⁻⁵) and the correct exons will be joined and the correct intron will be deleted. However, these ideal conditions require very close matches to the best splice site sequences and the absence of any competing cryptic splice site sequences within the introns and those conditions are rarely met in large eukaryotic genes that may cover more than 40 kilobase pairs. Recent studies have shown that the actual error rate can be considerably higher than 10⁻⁵ and may be as high as 2% or 3% errors (error rate of 2 or 3 × 10⁻²) per gene. Additional studies suggest that the error rate is no less than 0.1% per intron. This relatively high level of splicing errors explains why most splice variants are rapidly degraded by nonsense-mediated decay.", "title": "On the accuracy of splicing" }, { "paragraph_id": 16, "text": "The presence of sloppy binding sites within genes causes splicing errors and it may seem strange that these sites haven't been eliminated by natural selection. The argument for their persistence is similar to the argument for junk DNA.", "title": "On the accuracy of splicing" }, { "paragraph_id": 17, "text": "Although mutations which create or disrupt binding sites may be slightly deleterious, the large number of possible such mutations makes it inevitable that some will reach fixation in a population. This is particularly relevant in species, such as humans, with relatively small long-term effective population sizes. It is plausible, then, that the human genome carries a substantial load of suboptimal sequences which cause the generation of aberrant transcript isoforms.
In this study, we present direct evidence that this is indeed the case.", "title": "On the accuracy of splicing" }, { "paragraph_id": 18, "text": "While the catalytic reaction may be accurate enough for effective processing most of the time, the overall error rate may be partly limited by the fidelity of transcription because transcription errors will introduce mutations that create cryptic splice sites. In addition, the transcription error rate of 10⁻⁵ – 10⁻⁶ is high enough that one in every 25,000 transcribed exons will have an incorporation error in one of the splice sites leading to a skipped intron or a skipped exon. Almost all multi-exon genes will produce incorrectly spliced transcripts but the frequency of this background noise will depend on the size of the genes, the number of introns, and the quality of the splice site sequences.", "title": "On the accuracy of splicing" }, { "paragraph_id": 19, "text": "In some cases, splice variants will be produced by mutations in the gene (DNA). These can be SNP polymorphisms that create a cryptic splice site or mutate a functional site. They can also be somatic cell mutations that affect splicing in a particular tissue or a cell line. When the mutant allele is in a heterozygous state this will result in production of two abundant splice variants; one functional and one non-functional. In the homozygous state the mutant alleles may cause a genetic disease such as the hemophilia found in descendants of Queen Victoria where a mutation in one of the introns in a blood clotting factor gene creates a cryptic 3' splice site resulting in aberrant splicing. A significant fraction of human deaths by disease may be caused by mutations that interfere with normal splicing; mostly by creating cryptic splice sites.", "title": "On the accuracy of splicing" }, { "paragraph_id": 20, "text": "Incorrectly spliced transcripts can easily be detected and their sequences entered into the online databases.
They are usually described as \"alternatively spliced\" transcripts, which can be confusing because the term does not distinguish between real, biologically relevant, alternative splicing and processing noise due to splicing errors. One of the central issues in the field of alternative splicing is working out the differences between these two possibilities. Many scientists have argued that the null hypothesis should be splicing noise, putting the burden of proof on those who claim biologically relevant alternative splicing. According to those scientists, the claim of function must be accompanied by convincing evidence that multiple functional products are produced from the same gene.", "title": "On the accuracy of splicing" }, { "paragraph_id": 21, "text": "While introns do not encode protein products, they are integral to gene expression regulation. Some introns themselves encode functional RNAs through further processing after splicing to generate noncoding RNA molecules. Alternative splicing is widely used to generate multiple proteins from a single gene. Furthermore, some introns play essential roles in a wide range of gene expression regulatory functions such as nonsense-mediated decay and mRNA export.", "title": "Biological functions and evolution" }, { "paragraph_id": 22, "text": "After the initial discovery of introns in protein-coding genes of the eukaryotic nucleus, there was significant debate as to whether introns in modern-day organisms were inherited from a common ancient ancestor (termed the introns-early hypothesis), or whether they appeared in genes rather recently in the evolutionary process (termed the introns-late hypothesis). Another theory is that the spliceosome and the intron-exon structure of genes is a relic of the RNA world (the introns-first hypothesis). 
There is still considerable debate about which of these hypotheses is most correct but the popular consensus at the moment is that following the formation of the first eukaryotic cell, group II introns from the bacterial endosymbiont invaded the host genome. In the beginning these self-splicing introns excised themselves from the mRNA precursor but over time some of them lost that ability and their excision had to be aided in trans by other group II introns. Eventually a number of specific trans-acting introns evolved and these became the precursors to the snRNAs of the spliceosome. The efficiency of splicing was improved by association with stabilizing proteins to form the primitive spliceosome.", "title": "Biological functions and evolution" }, { "paragraph_id": 23, "text": "Early studies of genomic DNA sequences from a wide range of organisms show that the intron-exon structure of homologous genes in different organisms can vary widely. More recent studies of entire eukaryotic genomes have now shown that the lengths and density (introns/gene) of introns vary considerably between related species. For example, while the human genome contains an average of 8.4 introns/gene (139,418 in the genome), the unicellular fungus Encephalitozoon cuniculi contains only 0.0075 introns/gene (15 introns in the genome). Since eukaryotes arose from a common ancestor (common descent), there must have been extensive gain or loss of introns during evolutionary time. This process is thought to be subject to selection, with a tendency towards intron gain in larger species due to their smaller population sizes, and the converse in smaller (particularly unicellular) species.
Biological factors also influence which genes in a genome lose or accumulate introns.", "title": "Biological functions and evolution" }, { "paragraph_id": 24, "text": "Alternative splicing of exons within a gene after intron excision acts to introduce greater variability of protein sequences translated from a single gene, allowing multiple related proteins to be generated from a single gene and a single precursor mRNA transcript. The control of alternative RNA splicing is performed by a complex network of signaling molecules that respond to a wide range of intracellular and extracellular signals.", "title": "Biological functions and evolution" }, { "paragraph_id": 25, "text": "Introns contain several short sequences that are important for efficient splicing, such as acceptor and donor sites at either end of the intron as well as a branch point site, which are required for proper splicing by the spliceosome. Some introns are known to enhance the expression of the gene that they are contained in by a process known as intron-mediated enhancement (IME).", "title": "Biological functions and evolution" }, { "paragraph_id": 26, "text": "Actively transcribed regions of DNA frequently form R-loops that are vulnerable to DNA damage. In highly expressed yeast genes, introns inhibit R-loop formation and the occurrence of DNA damage. Genome-wide analysis in both yeast and humans revealed that intron-containing genes have decreased R-loop levels and decreased DNA damage compared to intronless genes of similar expression. Insertion of an intron within an R-loop prone gene can also suppress R-loop formation and recombination. Bonnet et al. 
(2017) speculated that the function of introns in maintaining genetic stability may explain their evolutionary maintenance at certain locations, particularly in highly expressed genes.", "title": "Biological functions and evolution" }, { "paragraph_id": 27, "text": "The physical presence of introns promotes cellular resistance to starvation via intron enhanced repression of ribosomal protein genes of nutrient-sensing pathways.", "title": "Biological functions and evolution" }, { "paragraph_id": 28, "text": "Introns may be lost or gained over evolutionary time, as shown by many comparative studies of orthologous genes. Subsequent analyses have identified thousands of examples of intron loss and gain events, and it has been proposed that the emergence of eukaryotes, or the initial stages of eukaryotic evolution, involved an intron invasion. Two definitive mechanisms of intron loss, reverse transcriptase-mediated intron loss (RTMIL) and genomic deletions, have been identified, and are known to occur. The definitive mechanisms of intron gain, however, remain elusive and controversial. At least seven mechanisms of intron gain have been reported thus far: intron transposition, transposon insertion, tandem genomic duplication, intron transfer, intron gain during double-strand break repair (DSBR), insertion of a group II intron, and intronization. In theory it should be easiest to deduce the origin of recently gained introns due to the lack of host-induced mutations, yet even introns gained recently did not arise from any of the aforementioned mechanisms. 
These findings thus raise the question of whether the proposed mechanisms of intron gain fail to describe the mechanistic origin of many novel introns because they are not accurate mechanisms of intron gain, or whether there are other, yet to be discovered, processes generating novel introns.", "title": "As mobile genetic elements" }, { "paragraph_id": 29, "text": "In intron transposition, the most commonly purported intron gain mechanism, a spliced intron is thought to reverse splice into either its own mRNA or another mRNA at a previously intron-less position. This intron-containing mRNA is then reverse transcribed and the resulting intron-containing cDNA may then cause intron gain via complete or partial recombination with its original genomic locus. Transposon insertions can also result in intron creation. Such an insertion could intronize the transposon without disrupting the coding sequence when a transposon inserts into the sequence AGGT, resulting in the duplication of this sequence on each side of the transposon. It is not yet understood why these elements are spliced, whether by chance, or by some preferential action by the transposon. In tandem genomic duplication, due to the similarity between consensus donor and acceptor splice sites, which both closely resemble AGGT, the tandem genomic duplication of an exonic segment harboring an AGGT sequence generates two potential splice sites. When recognized by the spliceosome, the sequence between the original and duplicated AGGT will be spliced, resulting in the creation of an intron without alteration of the coding sequence of the gene. Double-strand break repair via non-homologous end joining was recently identified as a source of intron gain when researchers identified short direct repeats flanking 43% of gained introns in Daphnia. These numbers must be compared to the number of conserved introns flanked by repeats in other organisms, though, for statistical relevance.
For group II intron insertion, the retrohoming of a group II intron into a nuclear gene was proposed to cause recent spliceosomal intron gain.", "title": "As mobile genetic elements" }, { "paragraph_id": 30, "text": "Intron transfer has been hypothesized to result in intron gain when a paralog or pseudogene gains an intron and then transfers this intron via recombination to an intron-absent location in its sister paralog. Intronization is the process by which mutations create novel introns from formerly exonic sequence. Thus, unlike other proposed mechanisms of intron gain, this mechanism does not require the insertion or generation of DNA to create a novel intron.", "title": "As mobile genetic elements" }, { "paragraph_id": 31, "text": "The only hypothesized mechanism of recent intron gain lacking any direct evidence is that of group II intron insertion, which, when demonstrated in vivo, abolishes gene expression. Group II introns are therefore presumed to be the ancestors of spliceosomal introns, acting as site-specific retroelements, and are no longer responsible for intron gain. Tandem genomic duplication is the only proposed mechanism with supporting in vivo experimental evidence: a short intragenic tandem duplication can insert a novel intron into a protein-coding gene, leaving the corresponding peptide sequence unchanged. This mechanism also has extensive indirect evidence lending support to the idea that tandem genomic duplication is a prevalent mechanism for intron gain. The testing of other proposed mechanisms in vivo, particularly intron gain during DSBR, intron transfer, and intronization, is possible, although these mechanisms must be demonstrated in vivo to solidify them as actual mechanisms of intron gain.
Further genomic analyses, especially when executed at the population level, may then quantify the relative contribution of each mechanism, possibly identifying species-specific biases that may shed light on varied rates of intron gain amongst different species.", "title": "As mobile genetic elements" }, { "paragraph_id": 32, "text": "Structure:", "title": "See also" }, { "paragraph_id": 33, "text": "Splicing:", "title": "See also" }, { "paragraph_id": 34, "text": "Function", "title": "See also" }, { "paragraph_id": 35, "text": "Others:", "title": "See also" }, { "paragraph_id": 36, "text": "", "title": "External links" } ]
An intron is any nucleotide sequence within a gene that is not expressed or operative in the final RNA product. The word intron is derived from the term intragenic region, i.e., a region inside a gene. The term intron refers to both the DNA sequence within a gene and the corresponding RNA sequence in RNA transcripts. The non-intron sequences that become joined by this RNA processing to form the mature RNA are called exons. Introns are found in the genes of most organisms and many viruses and they can be located in protein-coding genes, genes that function as RNA, and some pseudogenes. There are four main types of introns: tRNA introns, group I introns, group II introns, and spliceosomal introns. Introns are rare in Bacteria and Archaea (prokaryotes), but most eukaryotic genes contain multiple spliceosomal introns.
2001-12-09T18:50:52Z
2023-12-27T15:26:08Z
[ "Template:See also", "Template:Reflist", "Template:Cite book", "Template:Cite web", "Template:Wiktionary", "Template:ISBN", "Template:Short description", "Template:About", "Template:Cite journal", "Template:Post transcriptional modification", "Template:Use dmy dates" ]
https://en.wikipedia.org/wiki/Intron
15,345
IEE
IEE may stand for:
[ { "paragraph_id": 0, "text": "IEE may stand for:", "title": "" } ]
IEE may stand for: Industrial Electronic Engineers, an aerospace display manufacturer; Initial Environmental Evaluation, a preliminary environmental impact assessment; Institute for Energy Efficiency, a research institute at the University of California, Santa Barbara; Institute for Energy & Environment, at New Mexico State University; Institution of Electrical Engineers, a British professional organisation now part of the Institution of Engineering and Technology; Instituto de Estudos Empresariais, a Brazilian non-profit; Intuitive Ethical Extrovert, in socionics; Intelligent Energy Europe, CIP Operational programme
2023-05-21T22:59:55Z
[ "Template:Disambiguation", "Template:Canned search", "Template:Srt" ]
https://en.wikipedia.org/wiki/IEE
15,346
Institute of National Remembrance
The Institute of National Remembrance – Commission for the Prosecution of Crimes against the Polish Nation (Polish: Instytut Pamięci Narodowej – Komisja Ścigania Zbrodni przeciwko Narodowi Polskiemu, abbreviated IPN) is a Polish state research institute in charge of education and archives which also includes two public prosecution service components exercising investigative, prosecution and lustration powers. The IPN was established by the Polish parliament through the Act on the Institute of National Remembrance of 18 December 1998, which reformed and expanded the earlier Main Commission for the Investigation of Crimes against the Polish Nation of 1991, itself the successor of a body on Nazi crimes established in 1945. In 2018, the IPN's mission statement was amended by the controversial Amendment to the Act on the Institute of National Remembrance to include "protecting the reputation of the Republic of Poland and the Polish Nation". The IPN investigates and prosecutes Nazi and Communist crimes committed between 1917 and 1990, documents its findings, and disseminates them to the public. Some scholars have criticized the IPN for politicization, especially under Law and Justice governments. The IPN began its activities on 1 July 2000. The IPN is a founding member of the Platform of European Memory and Conscience. Since 2020, the IPN headquarters have been located at Postępu 18 Street in Warsaw. The IPN has eleven branches in other cities and seven delegation offices. The IPN's main areas of activity, in line with its original mission statement, include researching and documenting the losses which were suffered by the Polish Nation as a result of World War II and during the post-war totalitarian period. The IPN informs the public about the patriotic traditions of resistance against the occupational forces, and the Polish citizens' fight for sovereignty of the nation, including their efforts in defence of freedom and human dignity in general.
According to the IPN, it is its duty to prosecute crimes against peace, crimes against humanity, and war crimes. Its mission includes the need to compensate for damages suffered by the repressed and harmed people at a time when human rights were violated by the state, and to educate the public about the recent history of Poland. The IPN collects, organises and archives all documents about the Polish Communist security apparatus active from 22 July 1944 to 31 December 1989. Following the election of the Law and Justice party, the government formulated a new IPN law in 2016. The 2016 law stipulated that the IPN should oppose publications of false information that dishonors or harms the Polish nation. It also called for popularizing history as "an element of patriotic education". The new law also removed the influence of academia and the judiciary on the IPN. A 2018 amendment to the law added Article 55a, which attempts to defend the "good name" of Poland. Initially conceived as a criminal offense (3 years of jail) with an exemption for arts and research, the article was modified, following an international outcry, to a civil offense that may be tried in civil courts, and the exemption was deleted. Defamation charges under the act may be made by the IPN as well as by accredited NGOs such as the Polish League Against Defamation. By the same law, the institution's mission statement was changed to include "protecting the reputation of the Republic of Poland and the Polish Nation". IPN was created by special legislation on 18 December 1998. The IPN is divided into: On 29 April 2010, acting president Bronisław Komorowski signed into law a parliamentary act that reformed the Institute of National Remembrance. The IPN is governed by the director, who has a sovereign position that is independent of the Polish state hierarchy. The director may not be dismissed during his term unless he commits a harmful act.
Prior to 2016, the election of the director was a complex procedure, which involved the selection of a panel of candidates by the IPN Collegium (members appointed by the Polish Parliament and judiciary). The Polish Parliament (Sejm) then elected one of the candidates, with a required supermajority (60%). The director has a 5-year term of office. Following 2016 legislation in the PiS-controlled parliament, the former pluralist Collegium was replaced with a nine-member Collegium composed of PiS supporters, and the Sejm now appoints the director after consulting with the Collegium, without an election between candidates. The first director of the IPN was Leon Kieres, elected by the Sejm for five years on 8 June 2000 (term 30 June 2000 – 29 December 2005). The IPN granted some 6,500 people the "victim of communism" status and gathered significant archive material. The IPN faced difficulties both because it was new and because the Democratic Left Alliance (containing former communists) attempted to close it. The publication of Neighbors: The Destruction of the Jewish Community in Jedwabne, Poland by Jan T. Gross, proved to be a lifeline for the IPN, as Polish president Aleksander Kwaśniewski intervened to save it, deeming the IPN's research important as part of Jewish-Polish reconciliation and "apology diplomacy". The second director was Janusz Kurtyka, elected on 9 December 2005 with a term that started 29 December 2005 until his death in the Smolensk airplane crash on 10 April 2010. The election was controversial: during the campaign, a leak accusing Andrzej Przewoźnik of collaboration with the Służba Bezpieczeństwa caused him to withdraw his candidacy. Przewoźnik was cleared of the accusations only after he had lost the election. In 2006, the IPN opened a "Lustration Bureau" that increased the director's power. The bureau was assigned the task of examining the past of all candidates for public office.
Kurtyka widened archive access to the public and shifted focus from compensating victims to researching collaboration. In 1999, historian Franciszek Gryciuk was appointed to the Collegium of the IPN, which he chaired 2003–2004. From June 2008 to June 2011, he was vice president of the IPN. He was acting director 2010–2011, between the death of the IPN's second president, Janusz Kurtyka, in the 2010 Polish Air Force Tu-154 crash and the election of Łukasz Kamiński by the Polish Parliament as the third director. Łukasz Kamiński was elected by the Sejm in 2011 following the death of his predecessor. Kamiński headed the Wrocław Regional Bureau of Public Education prior to his election. During his term, the IPN faced a wide array of criticism calling for an overhaul or even replacement. Critics found fault with the IPN being a state institution, the lack of historical knowledge of its prosecutors, a relatively high number of microhistories with a debatable methodology, overuse of the martyrology motif, its research methodology, and its isolationism from the wider research community. In response, Kamiński implemented several changes, including organizing public debates with outside historians to counter the charge of isolationism, and suggested refocusing on victims as opposed to agents. On 22 July 2016 Jarosław Szarek was appointed to head the IPN. He dismissed Krzysztof Persak, co-author of the 2002 two-volume IPN study on the Jedwabne pogrom. In subsequent months, the IPN featured in media headlines for releasing controversial documents, including some relating to Lech Wałęsa, for memory politics conducted in schools, for efforts to change Communist street names, and for legislation efforts. According to historian Idesbald Goddeeris, this marks a return of politics to the IPN. On 23 July 2021 Karol Nawrocki was appointed to head the IPN.
Two components of the IPN are specialized parts of the Public Prosecution Service of Poland, namely the Main Commission for the Prosecution of Crimes against the Polish Nation and the Lustration Bureau. Each of these two components operates autonomously from the other components of the Institute and is headed by a director who is ex officio a Deputy Public Prosecutor General of Poland. The role of the IPN Director is in their case purely accessory and includes no powers over the investigations conducted; it is limited to providing a supporting apparatus and, when either office is vacated, presenting candidates for it to the Prosecutor General, who as the directors' superior has the discretionary power to appoint or reject them.

The Main Commission for the Prosecution of Crimes against the Polish Nation (Główna Komisja Ścigania Zbrodni Przeciwko Narodowi Polskiemu) is the oldest component of the IPN, tracing its origins to 1945. It investigates and prosecutes crimes committed on Polish soil against Polish citizens, as well as against people of other citizenships wronged in the country. War crimes which are not affected by the statute of limitations according to Polish law include:

On 15 March 2007, an amendment to the Polish law regulating the IPN (enacted on 18 December 2006) came into effect. The change gave the IPN new lustration powers and expanded its file access. It was enacted by the Law and Justice government in a series of legislative amendments during 2006 and the beginning of 2007. However, several articles of the 2006–2007 amendments were held unconstitutional by Poland's Constitutional Court on 11 May 2007, though the IPN's lustration power remained wider than under the original 1997 law. These powers include loss of position for those who submitted false lustration declarations, as well as a lustration process for candidates for senior office.
The research conducted by the IPN from December 2000 falls into four main topic areas:

The role of the IPN's Public Education Office (BEP), only vaguely defined in the IPN act, is to inform society about Communist and Nazi crimes and institutions. This vagueness allowed Paweł Machcewicz, BEP's director in 2000, the freedom to create a wide range of activities. Researchers at the IPN not only conduct research but are also required to take part in public outreach. BEP has published music CDs, DVDs, and serials. It has founded "historical clubs" for debates and lectures. It has also organized outdoor historical fairs, picnics, and games.

The IPN Bulletin (Polish: Biuletyn IPN) is a high-circulation popular-scientific journal, intended for lay readers and youth. Some 12,000 of its 15,000 copies are distributed free of charge to secondary schools in Poland, and the rest are sold in bookstores. The Bulletin contains popular-scientific and academic articles, polemics, manifestos, appeals to readers, promotional material on the IPN and BEP, denials and commentary on reports in the news, as well as multimedia supplements. The IPN also publishes the scientific journal Remembrance and Justice (Polish: Pamięć i Sprawiedliwość).

The IPN has issued several board games to help educate people about recent Polish history, including:

In 2008, the chairman of the IPN wrote to local administrations, calling for the addition of the word "German" before "Nazi" on all monuments and tablets commemorating Germany's victims, stating that "Nazis" is not always understood to relate specifically to Germans. Several sites of atrocities committed by Germany were duly updated with commemorative plaques clearly indicating the nationality of the perpetrators. The IPN also requested better documentation and commemoration of crimes that had been perpetrated by the Soviet Union.
The Polish government also asked UNESCO to officially change the name "Auschwitz Concentration Camp" to "Former Nazi German Concentration Camp Auschwitz-Birkenau", to clarify that the camp had been built and operated by Nazi Germany. In 2007, UNESCO's World Heritage Committee changed the camp's name to "Auschwitz Birkenau German Nazi Concentration and Extermination Camp (1940–1945)". Previously some German media, including Der Spiegel, had called the camp "Polish".

Since 2019, the Institute has published the Institute of National Remembrance Review (ISSN 2658-1566), a yearly peer-reviewed academic journal in English, with Anna Karolina Piekarska as editor-in-chief.

According to Georges Mink, common criticisms of the IPN include its dominance in the Polish research field, which is guaranteed by a budget that far exceeds that of any similar academic institution; the "thematic monotony ... of micro-historical studies ... of no real scientific interest" of its research; its focus on "martyrology"; and various criticisms of methodology and ethics. Some of these criticisms were addressed by Director Łukasz Kamiński, who according to Mink "has made significant changes" during his tenure; however, Mink, writing in 2017, was also concerned about the recent administrative and personnel changes in the IPN, including the election of Jarosław Szarek as director, which he posits are likely to result in further politicization of the IPN.

According to Valentin Behr, IPN research into the Communist era is valuable, as "the resources at its disposal have made it unrivalled as a research centre in the academic world"; at the same time, he said that the research is mostly focused on the era's negative aspects, and that it "is far from producing a critical approach to history, one that asks its own questions and is methodologically pluralistic."
He added that in recent years this problem has been ameliorated, as the IPN's work "has somewhat diversified as its administration has taken note of criticism on the part of academics."

According to Robert Traba, "under the ... IPN, tasks related to the 'national politics of memory' were – unfortunately – merged with the mission of independent academic research. In the public mind, there could be only one message flowing from the institute's name: memory and history as a science are one. The problem is that nothing could be further from the truth, and nothing could be more misleading. What the IPN's message presents, in fact, is the danger that Polish history will be grossly over-simplified." Traba states that "at the heart of debate today is a confrontation between those who support traditional methods and categories of research, and those who support newly defined methods and categories. ... Broadening the research perspective means the enrichment of the historian's instrumentarium." He places the IPN's research, in a broad sense, in the former camp; he describes it as "[a] solid, workshop-oriented, traditional, and positivist historiography ... which defends itself by the integrity of its analysis and its diversified source base", but criticizes its approach for leading to a "falsely conceived mission to find 'objective truth' at the expense of 'serious study of event history', and a 'simplified claim that only 'secret' sources, not accessible to ordinary mortals', can lead to that objective truth." Traba quotes historian Wiktoria Śliwowska, who wrote: "The historian must strive not only to reconstruct a given reality, but also to understand the background of events, the circumstances in which people acted. It is easy to condemn, but difficult to understand a complicated past. ...
[Meanwhile, in the IPN] thick volumes are being produced, into which are being thrown, with no real consideration, further evidence incriminating various persons now deceased (and therefore not able to defend themselves), and elderly people still alive – known and unknown." Traba posits that "there is ... a need for genuine debate that does not revolve around [the files] in the IPN archives, 'lustration,' or short-term and politically inspired discussions designed to establish the 'only real' truth", and suggests that adopting varied perspectives and diverse methodologies might contribute to such debate.

During PiS's control of the government between 2005 and 2007, the IPN was the focus of heated public controversies, in particular in regard to the pasts of Solidarity leader Lech Wałęsa and PZPR secretary Wojciech Jaruzelski. As a result, the IPN has been referred to as "a political institution at the centre of 'memory games'".

In 2008, two IPN employees, Sławomir Cenckiewicz and Piotr Gontarczyk, published a book, SB a Lech Wałęsa (The Security Service and Lech Wałęsa: A Contribution to a Biography), which caused a major controversy. The book's premise was that in the 1970s the Solidarity leader and later president of Poland Lech Wałęsa was a secret informant of the Polish Communist Security Service.

In 2018, the IPN hired Tomasz Greniuch, a historian who in his youth was associated with a far-right group. When he was promoted to regional director of the Wrocław branch in February 2021, his past came to media attention and resulted in criticism of Greniuch and the IPN. Greniuch issued an apology for his past behavior and resigned within weeks.

Valentin Behr writes that the IPN is mostly "concerned with the production of an official narrative about Poland's recent past" and therefore lacks innovation in its research, while noting that the situation is being remedied under recent leadership.
He writes that the IPN "has mainly taken in historians from the fringes of the academic field" who were either unable to obtain a prominent academic position or ideologically drawn to the IPN's approach, and that "in the academic field, being an 'IPN historian' can be a stigma". Behr explains this by pointing to a generational divide in Polish academia, visible when comparing the IPN to other Polish research outlets, and claims: "Hiring young historians was done deliberately to give the IPN greater autonomy from the academic world, considered as too leftist to describe the dark sides of the communist regime." He says that the IPN has created opportunities for many history specialists, who can carry out dedicated research there without needing an appointment at another institution, and for training young historians, noting that "the IPN is now the leading employer of young PhD students and PhDs in history specialized in contemporary history, ahead of Polish universities".

Historian Dariusz Stola states that the IPN is very bureaucratic in nature, comparing it to a "regular continental European bureaucracy, with usual deficiencies of its kind", and posits that in this respect the IPN resembles the former Communist institutions it is supposed to deal with, being equally "bureaucratic, centralist, heavy, inclined to extensive growth and quantity rather than quality of production".

One incident which caused controversy involved the "Wildstein list", a partial list of persons who allegedly worked for the communist-era Polish intelligence service, copied in 2004 from IPN archives (without IPN permission) by journalist Bronisław Wildstein and published on the Internet in 2005. The list gained much attention in Polish media and politics, and the IPN's security procedures and handling of the matter came under criticism.
The IPN's main areas of activity, in line with its original mission statement, include researching and documenting the losses which were suffered by the Polish Nation as a result of World War II and during the post-war totalitarian period. The IPN informs about the patriotic traditions of resistance against the occupational forces, and the Polish citizens' fight for sovereignty of the nation, including their efforts in defence of freedom and human dignity in general.

According to the IPN, it is its duty to prosecute crimes against peace and humanity, as well as war crimes. Its mission includes the need to compensate for damages suffered by repressed and harmed people at a time when human rights were violated by the state, and to educate the public about the recent history of Poland. The IPN collects, organises and archives all documents about the Polish Communist security apparatus active from 22 July 1944 to 31 December 1989.

Following the election of the Law and Justice party, the government formulated a new IPN law in 2016. The 2016 law stipulated that the IPN should oppose publications of false information that dishonors or harms the Polish nation. It also called for popularizing history as part of "an element of patriotic education". The new law also removed the influence of academia and the judiciary on the IPN.

A 2018 amendment to the law added article 55a, which attempts to defend the "good name" of Poland. Initially conceived as a criminal offense (3 years of jail) with an exemption for arts and research, the article was modified, following an international outcry, to a civil offense that may be tried in civil courts, and the exemption was deleted. Defamation charges under the act may be brought by the IPN as well as by accredited NGOs such as the Polish League Against Defamation. By the same law, the institution's mission statement was changed to include "protecting the reputation of the Republic of Poland and the Polish Nation".

The IPN was created by special legislation on 18 December 1998. The IPN is divided into:

On 29 April 2010, acting president Bronisław Komorowski signed into law a parliamentary act that reformed the Institute of National Remembrance. The IPN is governed by its director, who has a sovereign position independent of the Polish state hierarchy; the director may not be dismissed during his term unless he commits a harmful act.
The Institute of National Remembrance – Commission for the Prosecution of Crimes against the Polish Nation is a Polish state research institute in charge of education and archives which also includes two public prosecution service components exercising investigative, prosecution and lustration powers. The IPN was established by the Polish parliament by the Act on the Institute of National Remembrance of 18 December 1998 through reforming and expanding the earlier Main Commission for the Investigation of Crimes against the Polish Nation of 1991, which itself had replaced a body on Nazi crimes established in 1945. In 2018, IPN's mission statement was amended by the controversial Amendment to the Act on the Institute of National Remembrance to include "protecting the reputation of the Republic of Poland and the Polish Nation". The IPN investigates and prosecutes Nazi and Communist crimes committed between 1917 and 1990, documents its findings, and disseminates them to the public. Some scholars have criticized the IPN for politicization, especially under Law and Justice governments. The IPN began its activities on 1 July 2000. The IPN is a founding member of the Platform of European Memory and Conscience. Since 2020, the IPN headquarters have been located at Postępu 18 Street in Warsaw. The IPN has eleven branches in other cities and seven delegation offices.
2001-12-10T19:40:42Z
2023-12-30T15:41:12Z
[ "Template:Cite book", "Template:In lang", "Template:Anti-communism in Europe since 1989", "Template:Truth and Reconciliation Commission", "Template:Better source needed", "Template:Ill", "Template:Reflist", "Template:Webarchive", "Template:Cite news", "Template:Cite web", "Template:Cite journal", "Template:Cite conference", "Template:Pp", "Template:Infobox organization", "Template:Lang-pl", "Template:About", "Template:ISSN", "Template:National memory institutions", "Template:R", "Template:Authority control", "Template:Short description", "Template:Use dmy dates", "Template:Further" ]
https://en.wikipedia.org/wiki/Institute_of_National_Remembrance
15,347
Intelligence (disambiguation)
Intelligence is the ability to perceive or infer information, and to retain it. Intelligence may also refer to:
[ { "paragraph_id": 0, "text": "Intelligence is the ability to perceive or infer information, and to retain it.", "title": "" }, { "paragraph_id": 1, "text": "Intelligence may also refer to:", "title": "" } ]
Intelligence is the ability to perceive or infer information, and to retain it. Intelligence may also refer to:
2001-12-10T22:48:03Z
2023-11-19T20:30:35Z
[ "Template:Wiktionary", "Template:TOC right", "Template:Srt", "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/Intelligence_(disambiguation)
15,352
Identical particles
In quantum mechanics, identical particles (also called indistinguishable or indiscernible particles) are particles that cannot be distinguished from one another, even in principle. Species of identical particles include, but are not limited to, elementary particles (such as electrons), composite subatomic particles (such as atomic nuclei), as well as atoms and molecules. Quasiparticles also behave in this way. Although all known indistinguishable particles only exist at the quantum scale, there is no exhaustive list of all possible sorts of particles nor a clear-cut limit of applicability, as explored in quantum statistics. There are two main categories of identical particles: bosons, which can share quantum states, and fermions, which cannot (as described by the Pauli exclusion principle). Examples of bosons are photons, gluons, phonons, helium-4 nuclei and all mesons. Examples of fermions are electrons, neutrinos, quarks, protons, neutrons, and helium-3 nuclei. The fact that particles can be identical has important consequences in statistical mechanics, where calculations rely on probabilistic arguments, which are sensitive to whether or not the objects being studied are identical. As a result, identical particles exhibit markedly different statistical behaviour from distinguishable particles. For example, the indistinguishability of particles has been proposed as a solution to Gibbs' mixing paradox. There are two methods for distinguishing between particles. The first method relies on differences in the intrinsic physical properties of the particles, such as mass, electric charge, and spin. If differences exist, it is possible to distinguish between the particles by measuring the relevant properties. However, it is an empirical fact that microscopic particles of the same species have completely equivalent physical properties. 
For instance, every electron in the universe has exactly the same electric charge; this is why it is possible to speak of such a thing as "the charge of the electron". Even if the particles have equivalent physical properties, there remains a second method for distinguishing between particles, which is to track the trajectory of each particle. As long as the position of each particle can be measured with infinite precision (even when the particles collide), then there would be no ambiguity about which particle is which. The problem with the second approach is that it contradicts the principles of quantum mechanics. According to quantum theory, the particles do not possess definite positions during the periods between measurements. Instead, they are governed by wavefunctions that give the probability of finding a particle at each position. As time passes, the wavefunctions tend to spread out and overlap. Once this happens, it becomes impossible to determine, in a subsequent measurement, which of the particle positions correspond to those measured earlier. The particles are then said to be indistinguishable. What follows is an example to make the above discussion concrete, using the formalism developed in the article on the mathematical formulation of quantum mechanics. Let n denote a complete set of (discrete) quantum numbers for specifying single-particle states (for example, for the particle in a box problem, take n to be the quantized wave vector of the wavefunction.) For simplicity, consider a system composed of two particles that are not interacting with each other. Suppose that one particle is in the state n1, and the other is in the state n2. The quantum state of the system is denoted by the expression |n1⟩|n2⟩, where the order of the tensor product matters (if the state is |n2⟩|n1⟩, then particle 1 occupies the state n2 while particle 2 occupies the state n1).
This is the canonical way of constructing a basis for a tensor product space H ⊗ H of the combined system from the individual spaces. This expression is valid for distinguishable particles; however, it is not appropriate for indistinguishable particles, since |n1⟩|n2⟩ and |n2⟩|n1⟩, which result from exchanging the particles, are generally different states. Two states are physically equivalent only if they differ at most by a complex phase factor. For two indistinguishable particles, a state before the particle exchange must be physically equivalent to the state after the exchange, so these two states differ at most by a complex phase factor. This fact suggests that a state for two indistinguishable (and non-interacting) particles is given by the following two possibilities: |n1⟩|n2⟩ ± |n2⟩|n1⟩. States involving the sum are known as symmetric, while states involving the difference are called antisymmetric. More completely, symmetric states have the form (1/√2)(|n1⟩|n2⟩ + |n2⟩|n1⟩), while antisymmetric states have the form (1/√2)(|n1⟩|n2⟩ − |n2⟩|n1⟩). Note that if n1 and n2 are the same, the antisymmetric expression gives zero, which cannot be a state vector since it cannot be normalized. In other words, more than one identical particle cannot occupy an antisymmetric state (one antisymmetric state can be occupied only by one particle). This is known as the Pauli exclusion principle, and it is the fundamental reason behind the chemical properties of atoms and the stability of matter. The importance of symmetric and antisymmetric states is ultimately based on empirical evidence. It appears to be a fact of nature that identical particles do not occupy states of a mixed symmetry, such as c1|n1⟩|n2⟩ + c2|n2⟩|n1⟩ with |c1| ≠ |c2|. There is actually an exception to this rule, which will be discussed later.
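The symmetric and antisymmetric two-particle combinations above can be sketched numerically. The following is a minimal illustration only; the two-level single-particle basis and the function names are assumptions made for the example, not part of the article:

```python
from itertools import product

def tensor(u, v):
    """Tensor (Kronecker) product of two single-particle state vectors."""
    return [a * b for a, b in product(u, v)]

def two_particle_state(u, v, sign):
    """(Anti)symmetrized two-particle state u⊗v + sign * v⊗u (unnormalized)."""
    uv, vu = tensor(u, v), tensor(v, u)
    return [a + sign * b for a, b in zip(uv, vu)]

# Two basis states |n1> and |n2> of a hypothetical 2-level single-particle space.
n1 = [1.0, 0.0]
n2 = [0.0, 1.0]

sym = two_particle_state(n1, n2, +1)    # symmetric (bosonic) combination
asym = two_particle_state(n1, n2, -1)   # antisymmetric (fermionic) combination

# Pauli exclusion: putting both particles in the same single-particle state
# makes the antisymmetric combination vanish identically.
pauli = two_particle_state(n1, n1, -1)
```

Exchanging the two inputs leaves `sym` unchanged and flips the sign of `asym`, mirroring the ± behavior described in the text.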
On the other hand, it can be shown that the symmetric and antisymmetric states are in a sense special, by examining a particular symmetry of the multiple-particle states known as exchange symmetry. Define a linear operator P, called the exchange operator. When it acts on a tensor product of two state vectors, it exchanges the values of the state vectors: P|n1⟩|n2⟩ = |n2⟩|n1⟩. P is both Hermitian and unitary. Because it is unitary, it can be regarded as a symmetry operator. This symmetry may be described as the symmetry under the exchange of labels attached to the particles (i.e., to the single-particle Hilbert spaces). Clearly, P² = 1 (the identity operator), so the eigenvalues of P are +1 and −1. The corresponding eigenvectors are the symmetric and antisymmetric states: P(|n1⟩|n2⟩ ± |n2⟩|n1⟩) = ±(|n1⟩|n2⟩ ± |n2⟩|n1⟩). In other words, symmetric and antisymmetric states are essentially unchanged under the exchange of particle labels: they are only multiplied by a factor of +1 or −1, rather than being "rotated" somewhere else in the Hilbert space. This indicates that the particle labels have no physical meaning, in agreement with the earlier discussion on indistinguishability. It will be recalled that P is Hermitian. As a result, it can be regarded as an observable of the system, which means that, in principle, a measurement can be performed to find out if a state is symmetric or antisymmetric. Furthermore, the equivalence of the particles indicates that the Hamiltonian can be written in a symmetrical form, such as H(x1, x2) = H(x2, x1). It is possible to show that such Hamiltonians satisfy the commutation relation [P, H] = 0. According to the Heisenberg equation, this means that the value of P is a constant of motion. If the quantum state is initially symmetric (antisymmetric), it will remain symmetric (antisymmetric) as the system evolves. Mathematically, this says that the state vector is confined to one of the two eigenspaces of P, and is not allowed to range over the entire Hilbert space.
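The properties of the exchange operator can be checked directly on a small example. Below, P is written as the SWAP matrix on the two-particle basis |00⟩, |01⟩, |10⟩, |11⟩ of a hypothetical two-level single-particle space (an assumption for illustration), and a symmetric Hamiltonian h⊗1 + 1⊗h is shown to commute with it:

```python
# Exchange (SWAP) operator on the basis |00>, |01>, |10>, |11>.
P = [[1, 0, 0, 0],
     [0, 0, 1, 0],
     [0, 1, 0, 0],
     [0, 0, 0, 1]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apply(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

I = [[int(i == j) for j in range(4)] for i in range(4)]

# P^2 = 1, so the eigenvalues of P are +1 and -1; the eigenvectors are the
# symmetric and antisymmetric combinations of |01> and |10>:
sym = [0, 1, 1, 0]    # |01> + |10>, eigenvalue +1
asym = [0, 1, -1, 0]  # |01> - |10>, eigenvalue -1

# A symmetric Hamiltonian H = h⊗1 + 1⊗h (here h is a hypothetical 2-level
# Hamiltonian with off-diagonal coupling 1) commutes with P, so exchange
# symmetry is a constant of motion.
H = [[0, 1, 1, 0],
     [1, 0, 0, 1],
     [1, 0, 0, 1],
     [0, 1, 1, 0]]
```

Running the checks confirms P² = 1, P·sym = +sym, P·asym = −asym, and PH = HP.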
Thus, that eigenspace might as well be treated as the actual Hilbert space of the system. This is the idea behind the definition of Fock space. The choice of symmetry or antisymmetry is determined by the species of particle. For example, symmetric states must always be used when describing photons or helium-4 atoms, and antisymmetric states when describing electrons or protons. Particles which exhibit symmetric states are called bosons. The nature of symmetric states has important consequences for the statistical properties of systems composed of many identical bosons. These statistical properties are described as Bose–Einstein statistics. Particles which exhibit antisymmetric states are called fermions. Antisymmetry gives rise to the Pauli exclusion principle, which forbids identical fermions from sharing the same quantum state. Systems of many identical fermions are described by Fermi–Dirac statistics. Parastatistics are also possible. In certain two-dimensional systems, mixed symmetry can occur. These exotic particles are known as anyons, and they obey fractional statistics. Experimental evidence for the existence of anyons exists in the fractional quantum Hall effect, a phenomenon observed in the two-dimensional electron gases that form the inversion layer of MOSFETs. There is another type of statistic, known as braid statistics, which are associated with particles known as plektons. The spin-statistics theorem relates the exchange symmetry of identical particles to their spin. It states that bosons have integer spin, and fermions have half-integer spin. Anyons possess fractional spin. The above discussion generalizes readily to the case of N particles. Suppose there are N particles with quantum numbers n1, n2, ..., nN. If the particles are bosons, they occupy a totally symmetric state, which is symmetric under the exchange of any two particle labels: Here, the sum is taken over all different states under permutations p acting on N elements. 
The square root to the left of the sum is a normalizing constant. The quantity mn stands for the number of times each of the single-particle states n appears in the N-particle state. Note that Σn mn = N. In the same vein, fermions occupy totally antisymmetric states: Here, sgn(p) is the sign of each permutation (i.e. +1 if p is composed of an even number of transpositions, and −1 if odd). Note that there is no Πn mn term, because each single-particle state can appear only once in a fermionic state. Otherwise the sum would again be zero due to the antisymmetry, thus representing a physically impossible state. This is the Pauli exclusion principle for many particles. These states have been normalized so that ⟨Ψ|Ψ⟩ = 1. Suppose there is a system of N bosons (fermions) in the symmetric (antisymmetric) state and a measurement is performed on some other set of discrete observables, m. In general, this yields some result m1 for one particle, m2 for another particle, and so forth. If the particles are bosons (fermions), the state after the measurement must remain symmetric (antisymmetric), i.e. The probability of obtaining a particular result for the m measurement is It can be shown that which verifies that the total probability is 1. The sum has to be restricted to ordered values of m1, ..., mN to ensure that each multi-particle state is not counted more than once. So far, the discussion has included only discrete observables. It can be extended to continuous observables, such as the position x. Recall that an eigenstate of a continuous observable represents an infinitesimal range of values of the observable, not a single value as with discrete observables.
For instance, if a particle is in a state |ψ⟩, the probability of finding it in a region of volume dx surrounding some position x is |⟨x|ψ⟩|² dx. As a result, the continuous eigenstates |x⟩ are normalized to the delta function instead of unity: ⟨x|x′⟩ = δ(x − x′). Symmetric and antisymmetric multi-particle states can be constructed from continuous eigenstates in the same way as before. However, it is customary to use a different normalizing constant: A many-body wavefunction can be written, where the single-particle wavefunctions are defined, as usual, by ψn(x) = ⟨x|n⟩. The most important property of these wavefunctions is that exchanging any two of the coordinate variables changes the wavefunction by only a plus or minus sign. This is the manifestation of symmetry and antisymmetry in the wavefunction representation: The many-body wavefunction has the following significance: if the system is initially in a state with quantum numbers n1, ..., nN, and a position measurement is performed, the probability of finding particles in infinitesimal volumes near x1, x2, ..., xN is The factor of N! comes from our normalizing constant, which has been chosen so that, by analogy with single-particle wavefunctions, Because each integral runs over all possible values of x, each multi-particle state appears N! times in the integral. In other words, the probability associated with each event is evenly distributed across N! equivalent points in the integral space. Because it is usually more convenient to work with unrestricted integrals than restricted ones, the normalizing constant has been chosen to reflect this. Finally, the antisymmetric wavefunction can be written as the determinant of a matrix, known as a Slater determinant: The Hilbert space for n particles is given by the n-fold tensor product ⊗ⁿH. The permutation group Sn acts on this space by permuting the entries.
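The Slater-determinant construction can be sketched for two particles. The two single-particle orbitals below (a constant and a linear function) are hypothetical, chosen only so the sign flip under coordinate exchange and the Pauli zero are visible:

```python
from itertools import permutations

def det(M):
    """Determinant via the Leibniz formula (adequate for small N)."""
    n = len(M)
    total = 0.0
    for p in permutations(range(n)):
        sign = 1
        # count inversions to get the permutation sign
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        term = 1.0
        for i in range(n):
            term *= M[i][p[i]]
        total += sign * term
    return total

def slater(orbitals, xs):
    """Unnormalized antisymmetric N-body wavefunction: det of phi_i(x_j)."""
    return det([[phi(x) for x in xs] for phi in orbitals])

# hypothetical single-particle orbitals, for illustration only
phi0 = lambda x: 1.0
phi1 = lambda x: x

psi = slater([phi0, phi1], [0.2, 0.7])          # phi0(0.2)phi1(0.7) - phi0(0.7)phi1(0.2)
psi_swapped = slater([phi0, phi1], [0.7, 0.2])  # exchanging coordinates flips the sign
psi_pauli = slater([phi0, phi0], [0.2, 0.7])    # same orbital twice -> identically zero
```

The determinant changes sign when any two columns (particle coordinates) are exchanged, and vanishes when two rows (orbitals) coincide, which is exactly the many-particle Pauli exclusion principle stated above.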
By definition, the expectation values for an observable a of n indistinguishable particles should be invariant under these permutations. This means that for all ψ ∈ H and σ ∈ Sn, ⟨σψ, a σψ⟩ = ⟨ψ, a ψ⟩, or equivalently, for each σ ∈ Sn, σ†aσ = a. Two states are equivalent whenever their expectation values coincide for all observables. If we restrict to observables of n identical particles, and hence observables satisfying the equation above, we find that the following states (after normalization) are equivalent: ψ and σψ, for every σ ∈ Sn. The equivalence classes are in bijective relation with irreducible subspaces of ⊗ⁿH under Sn. Two obvious irreducible subspaces are the one dimensional symmetric/bosonic subspace and anti-symmetric/fermionic subspace. There are however more types of irreducible subspaces. States associated with these other irreducible subspaces are called parastatistic states. Young tableaux provide a way to classify all of these irreducible subspaces. The indistinguishability of particles has a profound effect on their statistical properties. To illustrate this, consider a system of N distinguishable, non-interacting particles. Once again, let nj denote the state (i.e. quantum numbers) of particle j. If the particles have the same physical properties, the nj's run over the same range of values. Let ε(n) denote the energy of a particle in state n. As the particles do not interact, the total energy of the system is the sum of the single-particle energies. The partition function of the system is Z = Σ exp(−[ε(n1) + ε(n2) + ⋯ + ε(nN)]/kT), where k is Boltzmann's constant and T is the temperature, and the sum runs over all combinations n1, ..., nN. This expression can be factored to obtain Z = ξ^N, where ξ = Σn exp(−ε(n)/kT). If the particles are identical, this equation is incorrect. Consider a state of the system, described by the single particle states [n1, ..., nN].
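The totally symmetric and totally antisymmetric subspaces can be picked out with explicit (anti)symmetrizer projectors (1/N!) Σp (sgn p) p. The dict-based encoding of states and the three-particle labels below are assumptions made for this sketch:

```python
from itertools import permutations
from fractions import Fraction

def perm_sign(p):
    """Sign of a permutation via inversion counting."""
    sign = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                sign = -sign
    return sign

def project(state, anti=False):
    """Apply the (anti)symmetrizer (1/N!) * sum_p (sgn p) * p to a state
    encoded as {basis-tuple: amplitude}."""
    N = len(next(iter(state)))
    perms = list(permutations(range(N)))
    norm = Fraction(1, len(perms))  # 1/N!
    out = {}
    for basis, amp in state.items():
        for p in perms:
            permuted = tuple(basis[i] for i in p)
            s = perm_sign(p) if anti else 1
            out[permuted] = out.get(permuted, Fraction(0)) + norm * s * amp
    return {b: a for b, a in out.items() if a != 0}

# Three particles in distinct single-particle states labelled 0, 1, 2.
state = {(0, 1, 2): Fraction(1)}
sym3 = project(state)              # totally symmetric (bosonic) component
antisym3 = project(state, True)    # totally antisymmetric (fermionic) component

# A repeated single-particle state has no antisymmetric component (Pauli):
double = project({(0, 0, 1): Fraction(1)}, True)
```

States outside the images of these two projectors belong to the mixed-symmetry (parastatistic) subspaces classified by Young tableaux.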
In the equation for Z, every possible permutation of the n's occurs once in the sum, even though each of these permutations is describing the same multi-particle state. Thus, the number of states has been over-counted. If the possibility of overlapping states is neglected, which is valid if the temperature is high, then the number of times each state is counted is approximately N!. The correct partition function is Z = ξ^N/N!. Note that this "high temperature" approximation does not distinguish between fermions and bosons. The discrepancy in the partition functions of distinguishable and indistinguishable particles was known as far back as the 19th century, before the advent of quantum mechanics. It leads to a difficulty known as the Gibbs paradox. Gibbs showed that in the equation Z = ξ^N, the entropy of a classical ideal gas is S = kN ln V + Nf(T), where V is the volume of the gas and f is some function of T alone. The problem with this result is that S is not extensive – if N and V are doubled, S does not double accordingly. Such a system does not obey the postulates of thermodynamics. Gibbs also showed that using Z = ξ^N/N! alters the result to S = kN ln(V/N) + kN + Nf(T), which is perfectly extensive. However, the reason for this correction to the partition function remained obscure until the discovery of quantum mechanics. There are important differences between the statistical behavior of bosons and fermions, which are described by Bose–Einstein statistics and Fermi–Dirac statistics respectively. Roughly speaking, bosons have a tendency to clump into the same quantum state, which underlies phenomena such as the laser, Bose–Einstein condensation, and superfluidity. Fermions, on the other hand, are forbidden from sharing quantum states, giving rise to systems such as the Fermi gas. This is known as the Pauli exclusion principle, and is responsible for much of chemistry, since the electrons in an atom (fermions) successively fill the many states within shells rather than all lying in the same lowest energy state.
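The effect of the 1/N! Gibbs correction on extensivity can be checked numerically. The sketch below assumes, for illustration only, a single-particle partition function proportional to the volume (ξ = cV):

```python
from math import log, lgamma

def log_Z_distinguishable(N, V, c=1.0):
    """ln Z for distinguishable particles: Z = xi**N with xi = c*V."""
    return N * log(c * V)

def log_Z_identical(N, V, c=1.0):
    """Gibbs-corrected ln Z for identical particles: Z = xi**N / N!
    (lgamma(N + 1) equals ln N!)."""
    return N * log(c * V) - lgamma(N + 1)

N, V = 1000, 1.0
# If ln Z (and hence the entropy) is extensive, doubling both N and V
# should exactly double it. The residuals below measure the failure:
defect_bad = log_Z_distinguishable(2 * N, 2 * V) - 2 * log_Z_distinguishable(N, V)
defect_good = log_Z_identical(2 * N, 2 * V) - 2 * log_Z_identical(N, V)
# defect_bad grows like 2N*ln(2) (the Gibbs paradox);
# defect_good is only of order ln(N), i.e. negligible per particle.
```

This mirrors the text: without the N! division the entropy gains a spurious 2Nk ln 2 on doubling, while the corrected partition function is extensive up to sub-extensive Stirling terms.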
The differences between the statistical behavior of fermions, bosons, and distinguishable particles can be illustrated using a system of two particles. The particles are designated A and B. Each particle can exist in two possible states, labelled |0⟩ and |1⟩, which have the same energy. The composite system can evolve in time, interacting with a noisy environment. Because the |0⟩ and |1⟩ states are energetically equivalent, neither state is favored, so this process has the effect of randomizing the states. (This is discussed in the article on quantum entanglement.) After some time, the composite system will have an equal probability of occupying each of the states available to it. The particle states are then measured. If A and B are distinguishable particles, then the composite system has four distinct states: |0⟩|0⟩, |1⟩|1⟩, |0⟩|1⟩, and |1⟩|0⟩. The probability of obtaining two particles in the |0⟩ state is 0.25; the probability of obtaining two particles in the |1⟩ state is 0.25; and the probability of obtaining one particle in the |0⟩ state and the other in the |1⟩ state is 0.5. If A and B are identical bosons, then the composite system has only three distinct states: |0⟩|0⟩, |1⟩|1⟩, and (1/√2)(|0⟩|1⟩ + |1⟩|0⟩).
When the experiment is performed, the probability of obtaining two particles in the |0⟩ state is now 0.33; the probability of obtaining two particles in the |1⟩ state is 0.33; and the probability of obtaining one particle in the |0⟩ state and the other in the |1⟩ state is 0.33. Note that the probability of finding particles in the same state is relatively larger than in the distinguishable case. This demonstrates the tendency of bosons to "clump". If A and B are identical fermions, there is only one state available to the composite system: the totally antisymmetric state (1/√2)(|0⟩|1⟩ − |1⟩|0⟩). When the experiment is performed, one particle is always in the |0⟩ state and the other is in the |1⟩ state. The results are summarized in Table 1: As can be seen, even a system of two particles exhibits different statistical behaviors between distinguishable particles, bosons, and fermions. In the articles on Fermi–Dirac statistics and Bose–Einstein statistics, these principles are extended to large numbers of particles, with qualitatively similar results. To understand why particle statistics work the way that they do, note first that particles are point-localized excitations and that particles that are spacelike separated do not interact. In a flat d-dimensional space M, at any given time, the configuration of two identical particles can be specified as an element of M × M. If there is no overlap between the particles, so that they do not interact directly, then their locations must belong to the space [M × M] \ {coincident points}, the subspace with coincident points removed.
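The two-particle probabilities quoted above follow from simply counting the equally likely thermalized states. A short sketch (the outcome labels and the dict encoding are assumptions for the example):

```python
from fractions import Fraction

def outcome_probs(equally_likely_states):
    """Average the measurement-outcome distribution over the equally likely
    states of the thermalized two-particle system."""
    totals = {"both |0>": Fraction(0), "both |1>": Fraction(0), "one each": Fraction(0)}
    weight = Fraction(1, len(equally_likely_states))
    for dist in equally_likely_states:
        for outcome, p in dist.items():
            totals[outcome] += weight * Fraction(p)
    return totals

# Distinguishable: four basis states; |01> and |10> both yield "one each".
distinguishable = [{"both |0>": 1}, {"both |1>": 1}, {"one each": 1}, {"one each": 1}]
# Bosons: three states; the symmetric superposition gives "one each" with certainty.
bosons = [{"both |0>": 1}, {"both |1>": 1}, {"one each": 1}]
# Fermions: only the antisymmetric state is available.
fermions = [{"one each": 1}]
```

The counts reproduce Table 1: 1/4, 1/4, 1/2 for distinguishable particles; 1/3 each for bosons (the "clumping" enhancement); and "one each" with certainty for fermions.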
The element (x, y) describes the configuration with particle I at x and particle II at y, while (y, x) describes the interchanged configuration. With identical particles, the state described by (x, y) ought to be indistinguishable from the state described by (y, x). Now consider the homotopy class of continuous paths from (x, y) to (y, x), within the space [M × M] \ {coincident points}. If M is ℝ^d where d ≥ 3, then this homotopy class only has one element. If M is ℝ², then this homotopy class has countably many elements (i.e. a counterclockwise interchange by half a turn, a counterclockwise interchange by one and a half turns, two and a half turns, etc., a clockwise interchange by half a turn, etc.). In particular, a counterclockwise interchange by half a turn is not homotopic to a clockwise interchange by half a turn. Lastly, if M is ℝ, then this homotopy class is empty. Suppose first that d ≥ 3. The universal covering space of [M × M] \ {coincident points}, which is none other than [M × M] \ {coincident points} itself, only has two points which are physically indistinguishable from (x, y), namely (x, y) itself and (y, x). So, the only permissible interchange is to swap both particles. This interchange is an involution, so its only effect is to multiply the phase by a square root of 1. If the root is +1, then the points have Bose statistics, and if the root is −1, the points have Fermi statistics. In the case M = ℝ², the universal covering space of [M × M] \ {coincident points} has infinitely many points that are physically indistinguishable from (x, y). This is described by the infinite cyclic group generated by making a counterclockwise half-turn interchange.
Unlike the previous case, performing this interchange twice in a row does not recover the original state; so such an interchange can generically result in a multiplication by exp(iθ) for any real θ (by unitarity, the absolute value of the multiplication must be 1). This is called anyonic statistics. In fact, even with two distinguishable particles, even though (x, y) is now physically distinguishable from (y, x), the universal covering space still contains infinitely many points which are physically indistinguishable from the original point, now generated by a counterclockwise rotation by one full turn. This generator, then, results in a multiplication by exp(iφ). This phase factor here is called the mutual statistics. Finally, in the case M = ℝ, the space [M × M] \ {coincident points} is not connected, so even if particle I and particle II are identical, they can still be distinguished via labels such as "the particle on the left" and "the particle on the right". There is no interchange symmetry here.
[ { "paragraph_id": 0, "text": "In quantum mechanics, identical particles (also called indistinguishable or indiscernible particles) are particles that cannot be distinguished from one another, even in principle. Species of identical particles include, but are not limited to, elementary particles (such as electrons), composite subatomic particles (such as atomic nuclei), as well as atoms and molecules. Quasiparticles also behave in this way. Although all known indistinguishable particles only exist at the quantum scale, there is no exhaustive list of all possible sorts of particles nor a clear-cut limit of applicability, as explored in quantum statistics.", "title": "" }, { "paragraph_id": 1, "text": "There are two main categories of identical particles: bosons, which can share quantum states, and fermions, which cannot (as described by the Pauli exclusion principle). Examples of bosons are photons, gluons, phonons, helium-4 nuclei and all mesons. Examples of fermions are electrons, neutrinos, quarks, protons, neutrons, and helium-3 nuclei.", "title": "" }, { "paragraph_id": 2, "text": "The fact that particles can be identical has important consequences in statistical mechanics, where calculations rely on probabilistic arguments, which are sensitive to whether or not the objects being studied are identical. As a result, identical particles exhibit markedly different statistical behaviour from distinguishable particles. For example, the indistinguishability of particles has been proposed as a solution to Gibbs' mixing paradox.", "title": "" }, { "paragraph_id": 3, "text": "There are two methods for distinguishing between particles. The first method relies on differences in the intrinsic physical properties of the particles, such as mass, electric charge, and spin. If differences exist, it is possible to distinguish between the particles by measuring the relevant properties. 
However, it is an empirical fact that microscopic particles of the same species have completely equivalent physical properties. For instance, every electron in the universe has exactly the same electric charge; this is why it is possible to speak of such a thing as \"the charge of the electron\".", "title": "Distinguishing between particles" }, { "paragraph_id": 4, "text": "Even if the particles have equivalent physical properties, there remains a second method for distinguishing between particles, which is to track the trajectory of each particle. As long as the position of each particle can be measured with infinite precision (even when the particles collide), then there would be no ambiguity about which particle is which.", "title": "Distinguishing between particles" }, { "paragraph_id": 5, "text": "The problem with the second approach is that it contradicts the principles of quantum mechanics. According to quantum theory, the particles do not possess definite positions during the periods between measurements. Instead, they are governed by wavefunctions that give the probability of finding a particle at each position. As time passes, the wavefunctions tend to spread out and overlap. Once this happens, it becomes impossible to determine, in a subsequent measurement, which of the particle positions correspond to those measured earlier. The particles are then said to be indistinguishable.", "title": "Distinguishing between particles" }, { "paragraph_id": 6, "text": "What follows is an example to make the above discussion concrete, using the formalism developed in the article on the mathematical formulation of quantum mechanics.", "title": "Quantum mechanical description" }, { "paragraph_id": 7, "text": "Let n denote a complete set of (discrete) quantum numbers for specifying single-particle states (for example, for the particle in a box problem, take n to be the quantized wave vector of the wavefunction.) 
For simplicity, consider a system composed of two particles that are not interacting with each other. Suppose that one particle is in the state n1, and the other is in the state n2. The quantum state of the system is denoted by the expression", "title": "Quantum mechanical description" }, { "paragraph_id": 8, "text": "where the order of the tensor product matters ( if | n 2 ⟩ | n 1 ⟩ {\\displaystyle |n_{2}\\rangle |n_{1}\\rangle } , then the particle 1 occupies the state n2 while the particle 2 occupies the state n1). This is the canonical way of constructing a basis for a tensor product space H ⊗ H {\\displaystyle H\\otimes H} of the combined system from the individual spaces. This expression is valid for distinguishable particles, however, it is not appropriate for indistinguishable particles since | n 1 ⟩ | n 2 ⟩ {\\displaystyle |n_{1}\\rangle |n_{2}\\rangle } and | n 2 ⟩ | n 1 ⟩ {\\displaystyle |n_{2}\\rangle |n_{1}\\rangle } as a result of exchanging the particles are generally different states.", "title": "Quantum mechanical description" }, { "paragraph_id": 9, "text": "Two states are physically equivalent only if they differ at most by a complex phase factor. For two indistinguishable particles, a state before the particle exchange must be physically equivalent to the state after the exchange, so these two states differ at most by a complex phase factor. This fact suggests that a state for two indistinguishable (and non-interacting) particles is given by following two possibilities:", "title": "Quantum mechanical description" }, { "paragraph_id": 10, "text": "States where it is a sum are known as symmetric, while states involving the difference are called antisymmetric. 
More completely, symmetric states have the form", "title": "Quantum mechanical description" }, { "paragraph_id": 11, "text": "while antisymmetric states have the form", "title": "Quantum mechanical description" }, { "paragraph_id": 12, "text": "Note that if n1 and n2 are the same, the antisymmetric expression gives zero, which cannot be a state vector since it cannot be normalized. In other words, no two identical particles can occupy the same single-particle state within an antisymmetric combination; each single-particle state is occupied by at most one particle. This is known as the Pauli exclusion principle, and it is the fundamental reason behind the chemical properties of atoms and the stability of matter.", "title": "Quantum mechanical description" }, { "paragraph_id": 13, "text": "The importance of symmetric and antisymmetric states is ultimately based on empirical evidence. It appears to be a fact of nature that identical particles do not occupy states of a mixed symmetry, such as", "title": "Quantum mechanical description" }, { "paragraph_id": 14, "text": "There is actually an exception to this rule, which will be discussed later. On the other hand, it can be shown that the symmetric and antisymmetric states are in a sense special, by examining a particular symmetry of the multiple-particle states known as exchange symmetry.", "title": "Quantum mechanical description" }, { "paragraph_id": 15, "text": "Define a linear operator P, called the exchange operator. When it acts on a tensor product of two state vectors, it exchanges the values of the state vectors:", "title": "Quantum mechanical description" }, { "paragraph_id": 16, "text": "P is both Hermitian and unitary. Because it is unitary, it can be regarded as a symmetry operator.
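The vanishing of the antisymmetric expression when n1 = n2 can be checked directly. A minimal NumPy sketch (the helper `two_particle_state` and the basis dimension `dim` are illustrative choices, not from the source):

```python
import numpy as np

def two_particle_state(a, b, sign, dim=3):
    """Build (|a>|b> + sign*|b>|a>)/sqrt(2) in the dim*dim tensor-product space."""
    ket = lambda i: np.eye(dim)[i]                  # standard basis vector |i>
    psi = np.kron(ket(a), ket(b)) + sign * np.kron(ket(b), ket(a))
    return psi / np.sqrt(2)

# Antisymmetric combination of two *different* states: a valid, unit-norm vector.
assert abs(np.linalg.norm(two_particle_state(0, 1, -1)) - 1.0) < 1e-12
# With both particles in the same state the antisymmetric expression is
# identically zero, so it cannot be normalized: the Pauli exclusion principle.
assert np.linalg.norm(two_particle_state(0, 0, -1)) == 0.0
```

The symmetric combination (`sign=+1`) never vanishes, which is why any number of bosons may share a state.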
This symmetry may be described as the symmetry under the exchange of labels attached to the particles (i.e., to the single-particle Hilbert spaces).", "title": "Quantum mechanical description" }, { "paragraph_id": 17, "text": "Clearly, P 2 = 1 {\\displaystyle P^{2}=1} (the identity operator), so the eigenvalues of P are +1 and −1. The corresponding eigenvectors are the symmetric and antisymmetric states:", "title": "Quantum mechanical description" }, { "paragraph_id": 18, "text": "In other words, symmetric and antisymmetric states are essentially unchanged under the exchange of particle labels: they are only multiplied by a factor of +1 or −1, rather than being \"rotated\" somewhere else in the Hilbert space. This indicates that the particle labels have no physical meaning, in agreement with the earlier discussion on indistinguishability.", "title": "Quantum mechanical description" }, { "paragraph_id": 19, "text": "It will be recalled that P is Hermitian. As a result, it can be regarded as an observable of the system, which means that, in principle, a measurement can be performed to find out if a state is symmetric or antisymmetric. Furthermore, the equivalence of the particles indicates that the Hamiltonian can be written in a symmetrical form, such as", "title": "Quantum mechanical description" }, { "paragraph_id": 20, "text": "It is possible to show that such Hamiltonians satisfy the commutation relation", "title": "Quantum mechanical description" }, { "paragraph_id": 21, "text": "According to the Heisenberg equation, this means that the value of P is a constant of motion. If the quantum state is initially symmetric (antisymmetric), it will remain symmetric (antisymmetric) as the system evolves. Mathematically, this says that the state vector is confined to one of the two eigenspaces of P, and is not allowed to range over the entire Hilbert space. Thus, that eigenspace might as well be treated as the actual Hilbert space of the system. 
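The stated properties of the exchange operator (Hermitian, unitary, P² = 1, eigenvalues ±1, and [H, P] = 0 for a symmetric Hamiltonian) can all be verified on a finite-dimensional toy model. A sketch assuming a d = 3 single-particle space and an arbitrary Hermitian single-particle Hamiltonian h (both illustrative):

```python
import numpy as np

d = 3
# Exchange operator on C^d (x) C^d, defined by P(|i>|j>) = |j>|i>.
P = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        P[j * d + i, i * d + j] = 1.0

assert np.allclose(P, P.T)                    # Hermitian (real symmetric here)
assert np.allclose(P @ P.T, np.eye(d * d))    # unitary
assert np.allclose(P @ P, np.eye(d * d))      # P^2 = identity
# Hence the eigenvalues are +1 (symmetric states) and -1 (antisymmetric states).
assert set(np.round(np.linalg.eigvalsh(P)).astype(int)) == {-1, 1}

# A symmetric Hamiltonian h(x)1 + 1(x)h commutes with P, so exchange
# symmetry is a constant of motion.
rng = np.random.default_rng(0)
h = rng.standard_normal((d, d))
h = h + h.T                                   # arbitrary Hermitian single-particle h
H = np.kron(h, np.eye(d)) + np.kron(np.eye(d), h)
assert np.allclose(H @ P, P @ H)
```

For d = 3 the +1 eigenspace has dimension 6 and the −1 eigenspace dimension 3, matching the counts of symmetric and antisymmetric basis states.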
This is the idea behind the definition of Fock space.", "title": "Quantum mechanical description" }, { "paragraph_id": 22, "text": "The choice of symmetry or antisymmetry is determined by the species of particle. For example, symmetric states must always be used when describing photons or helium-4 atoms, and antisymmetric states when describing electrons or protons.", "title": "Quantum mechanical description" }, { "paragraph_id": 23, "text": "Particles which exhibit symmetric states are called bosons. The nature of symmetric states has important consequences for the statistical properties of systems composed of many identical bosons. These statistical properties are described as Bose–Einstein statistics.", "title": "Quantum mechanical description" }, { "paragraph_id": 24, "text": "Particles which exhibit antisymmetric states are called fermions. Antisymmetry gives rise to the Pauli exclusion principle, which forbids identical fermions from sharing the same quantum state. Systems of many identical fermions are described by Fermi–Dirac statistics.", "title": "Quantum mechanical description" }, { "paragraph_id": 25, "text": "Parastatistics are also possible.", "title": "Quantum mechanical description" }, { "paragraph_id": 26, "text": "In certain two-dimensional systems, mixed symmetry can occur. These exotic particles are known as anyons, and they obey fractional statistics. Experimental evidence for the existence of anyons exists in the fractional quantum Hall effect, a phenomenon observed in the two-dimensional electron gases that form the inversion layer of MOSFETs. There is another type of statistic, known as braid statistics, which are associated with particles known as plektons.", "title": "Quantum mechanical description" }, { "paragraph_id": 27, "text": "The spin-statistics theorem relates the exchange symmetry of identical particles to their spin. It states that bosons have integer spin, and fermions have half-integer spin. 
Anyons possess fractional spin.", "title": "Quantum mechanical description" }, { "paragraph_id": 28, "text": "The above discussion generalizes readily to the case of N particles. Suppose there are N particles with quantum numbers n1, n2, ..., nN. If the particles are bosons, they occupy a totally symmetric state, which is symmetric under the exchange of any two particle labels:", "title": "Quantum mechanical description" }, { "paragraph_id": 29, "text": "Here, the sum is taken over all different states under permutations p acting on N elements. The square root to the left of the sum is a normalizing constant. The quantity mn stands for the number of times each of the single-particle states n appears in the N-particle state. Note that Σn mn = N.", "title": "Quantum mechanical description" }, { "paragraph_id": 30, "text": "In the same vein, fermions occupy totally antisymmetric states:", "title": "Quantum mechanical description" }, { "paragraph_id": 31, "text": "Here, sgn(p) is the sign of each permutation (i.e. + 1 {\displaystyle +1} if p {\displaystyle p} is composed of an even number of transpositions, and − 1 {\displaystyle -1} if odd). Note that there is no Π n m n {\displaystyle \Pi _{n}m_{n}} term, because each single-particle state can appear only once in a fermionic state. Otherwise the sum would again be zero due to the antisymmetry, thus representing a physically impossible state. This is the Pauli exclusion principle for many particles.", "title": "Quantum mechanical description" }, { "paragraph_id": 32, "text": "These states have been normalized so that", "title": "Quantum mechanical description" }, { "paragraph_id": 33, "text": "Suppose there is a system of N bosons (fermions) in the symmetric (antisymmetric) state", "title": "Quantum mechanical description" }, { "paragraph_id": 34, "text": "and a measurement is performed on some other set of discrete observables, m.
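The N-particle construction (sum over permutations, with sgn(p) in the fermionic case) can be sketched numerically. The helper name and the dimension are illustrative; parity is computed by counting inversions of the permutation:

```python
import itertools
import numpy as np

def permuted_sum_state(ns, sign, dim=4):
    """Unnormalized Σ_p sign(p)·|n_p(1)>...|n_p(N)>: symmetric for sign=+1,
    antisymmetric (sgn(p) weights) for sign=-1."""
    ket = lambda i: np.eye(dim)[i]
    N = len(ns)
    psi = np.zeros(dim ** N)
    for p in itertools.permutations(range(N)):
        # number of inversions gives the permutation's parity
        inversions = sum(1 for a in range(N) for b in range(a + 1, N) if p[a] > p[b])
        term = ket(ns[p[0]])
        for k in range(1, N):
            term = np.kron(term, ket(ns[p[k]]))
        psi += (sign ** inversions) * term
    return psi

# Bosonic state of three distinct levels: 3! orthogonal terms, norm sqrt(6).
assert abs(np.linalg.norm(permuted_sum_state([0, 1, 2], +1)) - 6 ** 0.5) < 1e-12
# Fermionic state with a repeated level cancels term by term: Pauli exclusion.
assert np.linalg.norm(permuted_sum_state([0, 1, 1], -1)) == 0.0
```

Dividing by the √(N! Πn mn!) factor quoted in the text (or by √(N!) in the fermionic case, where every mn is 0 or 1) then yields the normalized states.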
In general, this yields some result m1 for one particle, m2 for another particle, and so forth. If the particles are bosons (fermions), the state after the measurement must remain symmetric (antisymmetric), i.e.", "title": "Quantum mechanical description" }, { "paragraph_id": 35, "text": "The probability of obtaining a particular result for the m measurement is", "title": "Quantum mechanical description" }, { "paragraph_id": 36, "text": "It can be shown that", "title": "Quantum mechanical description" }, { "paragraph_id": 37, "text": "which verifies that the total probability is 1. The sum has to be restricted to ordered values of m1, ..., mN to ensure that each multi-particle state is not counted more than once.", "title": "Quantum mechanical description" }, { "paragraph_id": 38, "text": "So far, the discussion has included only discrete observables. It can be extended to continuous observables, such as the position x.", "title": "Quantum mechanical description" }, { "paragraph_id": 39, "text": "Recall that an eigenstate of a continuous observable represents an infinitesimal range of values of the observable, not a single value as with discrete observables. For instance, if a particle is in a state |ψ⟩, the probability of finding it in a region of volume dx surrounding some position x is", "title": "Quantum mechanical description" }, { "paragraph_id": 40, "text": "As a result, the continuous eigenstates |x⟩ are normalized to the delta function instead of unity:", "title": "Quantum mechanical description" }, { "paragraph_id": 41, "text": "Symmetric and antisymmetric multi-particle states can be constructed from continuous eigenstates in the same way as before. 
However, it is customary to use a different normalizing constant:", "title": "Quantum mechanical description" }, { "paragraph_id": 42, "text": "A many-body wavefunction can be written,", "title": "Quantum mechanical description" }, { "paragraph_id": 43, "text": "where the single-particle wavefunctions are defined, as usual, by", "title": "Quantum mechanical description" }, { "paragraph_id": 44, "text": "The most important property of these wavefunctions is that exchanging any two of the coordinate variables changes the wavefunction by only a plus or minus sign. This is the manifestation of symmetry and antisymmetry in the wavefunction representation:", "title": "Quantum mechanical description" }, { "paragraph_id": 45, "text": "The many-body wavefunction has the following significance: if the system is initially in a state with quantum numbers n1, ..., nN, and a position measurement is performed, the probability of finding particles in infinitesimal volumes near x1, x2, ..., xN is", "title": "Quantum mechanical description" }, { "paragraph_id": 46, "text": "The factor of N! comes from our normalizing constant, which has been chosen so that, by analogy with single-particle wavefunctions,", "title": "Quantum mechanical description" }, { "paragraph_id": 47, "text": "Because each integral runs over all possible values of x, each multi-particle state appears N! times in the integral. In other words, the probability associated with each event is evenly distributed across N! equivalent points in the integral space. 
Because it is usually more convenient to work with unrestricted integrals than restricted ones, the normalizing constant has been chosen to reflect this.", "title": "Quantum mechanical description" }, { "paragraph_id": 48, "text": "Finally, the antisymmetric wavefunction can be written as the determinant of a matrix, known as a Slater determinant:", "title": "Quantum mechanical description" }, { "paragraph_id": 49, "text": "The Hilbert space for n {\displaystyle n} particles is given by the tensor product ⨂ n H {\textstyle \bigotimes _{n}H} . The permutation group S n {\displaystyle S_{n}} acts on this space by permuting the entries. By definition the expectation values for an observable a {\displaystyle a} of n {\displaystyle n} indistinguishable particles should be invariant under these permutations. This means that for all ψ ∈ H {\displaystyle \psi \in H} and σ ∈ S n {\displaystyle \sigma \in S_{n}}", "title": "Quantum mechanical description" }, { "paragraph_id": 50, "text": "or equivalently for each σ ∈ S n {\displaystyle \sigma \in S_{n}}", "title": "Quantum mechanical description" }, { "paragraph_id": 51, "text": "Two states are equivalent whenever their expectation values coincide for all observables. If we restrict to observables of n {\displaystyle n} identical particles, and hence observables satisfying the equation above, we find that the following states (after normalization) are equivalent", "title": "Quantum mechanical description" }, { "paragraph_id": 52, "text": "The equivalence classes are in bijective relation with irreducible subspaces of ⨂ n H {\textstyle \bigotimes _{n}H} under S n {\displaystyle S_{n}} .", "title": "Quantum mechanical description" }, { "paragraph_id": 53, "text": "Two obvious irreducible subspaces are the one dimensional symmetric/bosonic subspace and anti-symmetric/fermionic subspace. There are however more types of irreducible subspaces.
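The Slater-determinant form mentioned above, ψ(x1,...,xN) = (1/√N!)·det[φi(xj)], makes both antisymmetry and Pauli exclusion properties of the determinant itself. A sketch with made-up single-particle orbitals (the trigonometric φi are purely illustrative):

```python
import math
import numpy as np

def slater_wavefunction(orbitals, xs):
    """psi(x1,...,xN) = (1/sqrt(N!)) * det[ phi_i(x_j) ]."""
    mat = np.array([[phi(x) for x in xs] for phi in orbitals])
    return np.linalg.det(mat) / math.sqrt(math.factorial(len(orbitals)))

# Toy orbitals, chosen only so the determinant is generically nonzero:
orbs = [lambda x: np.sin(x), lambda x: np.cos(x), lambda x: np.sin(2 * x)]

a = slater_wavefunction(orbs, [0.1, 0.7, 1.3])
b = slater_wavefunction(orbs, [0.7, 0.1, 1.3])       # swap particles 1 and 2
assert np.isclose(a, -b)                             # exchange flips the sign
# Two particles at the same coordinate => two equal columns => determinant 0.
assert np.isclose(slater_wavefunction(orbs, [0.5, 0.5, 1.3]), 0.0)
```

Swapping two coordinates swaps two columns of the matrix, which flips the determinant's sign; equal coordinates give equal columns and a vanishing determinant, exactly the behavior the text attributes to antisymmetric wavefunctions.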
States associated with these other irreducible subspaces are called parastatistic states. Young tableaux provide a way to classify all of these irreducible subspaces.", "title": "Quantum mechanical description" }, { "paragraph_id": 54, "text": "The indistinguishability of particles has a profound effect on their statistical properties. To illustrate this, consider a system of N distinguishable, non-interacting particles. Once again, let nj denote the state (i.e. quantum numbers) of particle j. If the particles have the same physical properties, the nj's run over the same range of values. Let ε(n) denote the energy of a particle in state n. As the particles do not interact, the total energy of the system is the sum of the single-particle energies. The partition function of the system is", "title": "Statistical properties" }, { "paragraph_id": 55, "text": "where k is Boltzmann's constant and T is the temperature. This expression can be factored to obtain", "title": "Statistical properties" }, { "paragraph_id": 56, "text": "where", "title": "Statistical properties" }, { "paragraph_id": 57, "text": "If the particles are identical, this equation is incorrect. Consider a state of the system, described by the single particle states [n1, ..., nN]. In the equation for Z, every possible permutation of the n's occurs once in the sum, even though each of these permutations is describing the same multi-particle state. Thus, the number of states has been over-counted.", "title": "Statistical properties" }, { "paragraph_id": 58, "text": "If the possibility of overlapping states is neglected, which is valid if the temperature is high, then the number of times each state is counted is approximately N!. 
The correct partition function is", "title": "Statistical properties" }, { "paragraph_id": 59, "text": "Note that this \"high temperature\" approximation does not distinguish between fermions and bosons.", "title": "Statistical properties" }, { "paragraph_id": 60, "text": "The discrepancy in the partition functions of distinguishable and indistinguishable particles was known as far back as the 19th century, before the advent of quantum mechanics. It leads to a difficulty known as the Gibbs paradox. Gibbs showed that in the equation Z = ξ, the entropy of a classical ideal gas is", "title": "Statistical properties" }, { "paragraph_id": 61, "text": "where V is the volume of the gas and f is some function of T alone. The problem with this result is that S is not extensive – if N and V are doubled, S does not double accordingly. Such a system does not obey the postulates of thermodynamics.", "title": "Statistical properties" }, { "paragraph_id": 62, "text": "Gibbs also showed that using Z = ξ/N! alters the result to", "title": "Statistical properties" }, { "paragraph_id": 63, "text": "which is perfectly extensive. However, the reason for this correction to the partition function remained obscure until the discovery of quantum mechanics", "title": "Statistical properties" }, { "paragraph_id": 64, "text": "There are important differences between the statistical behavior of bosons and fermions, which are described by Bose–Einstein statistics and Fermi–Dirac statistics respectively. Roughly speaking, bosons have a tendency to clump into the same quantum state, which underlies phenomena such as the laser, Bose–Einstein condensation, and superfluidity. Fermions, on the other hand, are forbidden from sharing quantum states, giving rise to systems such as the Fermi gas. 
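The N!-overcounting argument can be checked by brute force for two particles: the Gibbs-corrected ξ²/2! sits between the exact bosonic and fermionic sums and approximates both at high temperature. The toy spectrum and temperature below are arbitrary illustrative choices:

```python
import itertools
import math

energies = list(range(12))                 # toy single-particle spectrum e_n = n
kT = 50.0                                  # "high temperature" regime
boltz = lambda e: math.exp(-e / kT)

xi = sum(boltz(e) for e in energies)       # single-particle partition function
z_dist = xi ** 2                           # distinguishable pair: overcounts states
z_bose = sum(boltz(a + b) for a, b in
             itertools.combinations_with_replacement(energies, 2))
z_fermi = sum(boltz(a + b) for a, b in itertools.combinations(energies, 2))
z_corrected = z_dist / math.factorial(2)   # Gibbs' xi^N / N!

# The corrected value lies between the exact quantum sums and is close to
# both when few states are doubly occupied (high T):
assert z_fermi < z_corrected < z_bose
assert abs(z_corrected - z_bose) / z_bose < 0.1
```

The gap between z_bose and z_fermi comes entirely from doubly occupied levels, which is why the N! correction, blind to the boson/fermion distinction, only works when such overlaps are rare.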
This is known as the Pauli exclusion principle, and is responsible for much of chemistry, since the electrons in an atom (fermions) successively fill the many states within shells rather than all lying in the same lowest energy state.", "title": "Statistical properties" }, { "paragraph_id": 65, "text": "The differences between the statistical behavior of fermions, bosons, and distinguishable particles can be illustrated using a system of two particles. The particles are designated A and B. Each particle can exist in two possible states, labelled | 0 ⟩ {\displaystyle |0\rangle } and | 1 ⟩ {\displaystyle |1\rangle } , which have the same energy.", "title": "Statistical properties" }, { "paragraph_id": 66, "text": "The composite system can evolve in time, interacting with a noisy environment. Because the | 0 ⟩ {\displaystyle |0\rangle } and | 1 ⟩ {\displaystyle |1\rangle } states are energetically equivalent, neither state is favored, so this process has the effect of randomizing the states. (This is discussed in the article on quantum entanglement.) After some time, the composite system will have an equal probability of occupying each of the states available to it. The particle states are then measured.", "title": "Statistical properties" }, { "paragraph_id": 67, "text": "If A and B are distinguishable particles, then the composite system has four distinct states: | 0 ⟩ | 0 ⟩ {\displaystyle |0\rangle |0\rangle } , | 1 ⟩ | 1 ⟩ {\displaystyle |1\rangle |1\rangle } , | 0 ⟩ | 1 ⟩ {\displaystyle |0\rangle |1\rangle } , and | 1 ⟩ | 0 ⟩ {\displaystyle |1\rangle |0\rangle } .
The probability of obtaining two particles in the | 0 ⟩ {\\displaystyle |0\\rangle } state is 0.25; the probability of obtaining two particles in the | 1 ⟩ {\\displaystyle |1\\rangle } state is 0.25; and the probability of obtaining one particle in the | 0 ⟩ {\\displaystyle |0\\rangle } state and the other in the | 1 ⟩ {\\displaystyle |1\\rangle } state is 0.5.", "title": "Statistical properties" }, { "paragraph_id": 68, "text": "If A and B are identical bosons, then the composite system has only three distinct states: | 0 ⟩ | 0 ⟩ {\\displaystyle |0\\rangle |0\\rangle } , | 1 ⟩ | 1 ⟩ {\\displaystyle |1\\rangle |1\\rangle } , and 1 2 ( | 0 ⟩ | 1 ⟩ + | 1 ⟩ | 0 ⟩ ) {\\displaystyle {\\frac {1}{\\sqrt {2}}}(|0\\rangle |1\\rangle +|1\\rangle |0\\rangle )} . When the experiment is performed, the probability of obtaining two particles in the | 0 ⟩ {\\displaystyle |0\\rangle } state is now 0.33; the probability of obtaining two particles in the | 1 ⟩ {\\displaystyle |1\\rangle } state is 0.33; and the probability of obtaining one particle in the | 0 ⟩ {\\displaystyle |0\\rangle } state and the other in the | 1 ⟩ {\\displaystyle |1\\rangle } state is 0.33. Note that the probability of finding particles in the same state is relatively larger than in the distinguishable case. This demonstrates the tendency of bosons to \"clump\".", "title": "Statistical properties" }, { "paragraph_id": 69, "text": "If A and B are identical fermions, there is only one state available to the composite system: the totally antisymmetric state 1 2 ( | 0 ⟩ | 1 ⟩ − | 1 ⟩ | 0 ⟩ ) {\\displaystyle {\\frac {1}{\\sqrt {2}}}(|0\\rangle |1\\rangle -|1\\rangle |0\\rangle )} . 
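These probabilities amount to counting equally likely available states in each case (including the single antisymmetric state of the fermionic case). A tiny sketch, with purely illustrative string labels for the states:

```python
from fractions import Fraction

# Equally likely post-randomization states for each kind of particle pair:
distinguishable = ["00", "11", "01", "10"]     # four product states
bosons = ["00", "11", "sym(01)"]               # three symmetric states
fermions = ["antisym(01)"]                     # one antisymmetric state

def p_same_state(states):
    """Probability that both particles are found in the same single-particle state."""
    return Fraction(sum(s in ("00", "11") for s in states), len(states))

assert p_same_state(distinguishable) == Fraction(1, 2)   # 0.25 + 0.25
assert p_same_state(bosons) == Fraction(2, 3)            # bosons "clump"
assert p_same_state(fermions) == 0                       # Pauli exclusion
```

The jump from 1/2 to 2/3 is the two-particle seed of Bose–Einstein "clumping", and the fermionic 0 is the two-particle seed of the exclusion principle.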
When the experiment is performed, one particle is always in the | 0 ⟩ {\\displaystyle |0\\rangle } state and the other is in the | 1 ⟩ {\\displaystyle |1\\rangle } state.", "title": "Statistical properties" }, { "paragraph_id": 70, "text": "The results are summarized in Table 1:", "title": "Statistical properties" }, { "paragraph_id": 71, "text": "As can be seen, even a system of two particles exhibits different statistical behaviors between distinguishable particles, bosons, and fermions. In the articles on Fermi–Dirac statistics and Bose–Einstein statistics, these principles are extended to large number of particles, with qualitatively similar results.", "title": "Statistical properties" }, { "paragraph_id": 72, "text": "To understand why particle statistics work the way that they do, note first that particles are point-localized excitations and that particles that are spacelike separated do not interact. In a flat d-dimensional space M, at any given time, the configuration of two identical particles can be specified as an element of M × M. If there is no overlap between the particles, so that they do not interact directly, then their locations must belong to the space [M × M] \\ {coincident points}, the subspace with coincident points removed. The element (x, y) describes the configuration with particle I at x and particle II at y, while (y, x) describes the interchanged configuration. With identical particles, the state described by (x, y) ought to be indistinguishable from the state described by (y, x). Now consider the homotopy class of continuous paths from (x, y) to (y, x), within the space [M × M] \\ {coincident points} . If M is R d {\\displaystyle \\mathbb {R} ^{d}} where d ≥ 3, then this homotopy class only has one element. If M is R 2 {\\displaystyle \\mathbb {R} ^{2}} , then this homotopy class has countably many elements (i.e. 
a counterclockwise interchange by half a turn, a counterclockwise interchange by one and a half turns, two and a half turns, etc., a clockwise interchange by half a turn, etc.). In particular, a counterclockwise interchange by half a turn is not homotopic to a clockwise interchange by half a turn. Lastly, if M is R {\\displaystyle \\mathbb {R} } , then this homotopy class is empty.", "title": "The homotopy class" }, { "paragraph_id": 73, "text": "Suppose first that d ≥ 3. The universal covering space of [M × M] \\ {coincident points}, which is none other than [M × M] \\ {coincident points} itself, only has two points which are physically indistinguishable from (x, y), namely (x, y) itself and (y, x). So, the only permissible interchange is to swap both particles. This interchange is an involution, so its only effect is to multiply the phase by a square root of 1. If the root is +1, then the points have Bose statistics, and if the root is –1, the points have Fermi statistics.", "title": "The homotopy class" }, { "paragraph_id": 74, "text": "In the case M = R 2 , {\\displaystyle M=\\mathbb {R} ^{2},} the universal covering space of [M × M] \\ {coincident points} has infinitely many points that are physically indistinguishable from (x, y). This is described by the infinite cyclic group generated by making a counterclockwise half-turn interchange. Unlike the previous case, performing this interchange twice in a row does not recover the original state; so such an interchange can generically result in a multiplication by exp(iθ) for any real θ (by unitarity, the absolute value of the multiplication must be 1). This is called anyonic statistics. In fact, even with two distinguishable particles, even though (x, y) is now physically distinguishable from (y, x), the universal covering space still contains infinitely many points which are physically indistinguishable from the original point, now generated by a counterclockwise rotation by one full turn. 
This generator, then, results in a multiplication by exp(iφ). This phase factor here is called the mutual statistics.", "title": "The homotopy class" }, { "paragraph_id": 75, "text": "Finally, in the case M = R , {\\displaystyle M=\\mathbb {R} ,} the space [M × M] \\ {coincident points} is not connected, so even if particle I and particle II are identical, they can still be distinguished via labels such as \"the particle on the left\" and \"the particle on the right\". There is no interchange symmetry here.", "title": "The homotopy class" } ]
In quantum mechanics, identical particles are particles that cannot be distinguished from one another, even in principle. Species of identical particles include, but are not limited to, elementary particles, composite subatomic particles, as well as atoms and molecules. Quasiparticles also behave in this way. Although all known indistinguishable particles only exist at the quantum scale, there is no exhaustive list of all possible sorts of particles nor a clear-cut limit of applicability, as explored in quantum statistics. There are two main categories of identical particles: bosons, which can share quantum states, and fermions, which cannot. Examples of bosons are photons, gluons, phonons, helium-4 nuclei and all mesons. Examples of fermions are electrons, neutrinos, quarks, protons, neutrons, and helium-3 nuclei. The fact that particles can be identical has important consequences in statistical mechanics, where calculations rely on probabilistic arguments, which are sensitive to whether or not the objects being studied are identical. As a result, identical particles exhibit markedly different statistical behaviour from distinguishable particles. For example, the indistinguishability of particles has been proposed as a solution to Gibbs' mixing paradox.
2001-12-11T06:25:40Z
2023-10-24T05:10:55Z
[ "Template:Math", "Template:Use American English", "Template:Harvtxt", "Template:Cite journal", "Template:Citation", "Template:Short description", "Template:Tmath", "Template:ISBN", "Template:Mvar", "Template:Statistical mechanics", "Template:See also", "Template:Reflist", "Template:Cite web", "Template:Cite book", "Template:Multiple issues" ]
https://en.wikipedia.org/wiki/Identical_particles
15,354
Interstitial cystitis
Interstitial cystitis (IC), a type of bladder pain syndrome (BPS), is chronic pain in the bladder and pelvic floor of unknown cause. It is the urologic chronic pelvic pain syndrome of women. Symptoms include feeling the need to urinate right away, needing to urinate often, and pain with sex. IC/BPS is associated with depression and lower quality of life. Many of those affected also have irritable bowel syndrome and fibromyalgia. The cause of interstitial cystitis is unknown. While it can run in families, it typically does not. The diagnosis is usually based on the symptoms after ruling out other conditions. Typically the urine culture is negative. Ulceration or inflammation may be seen on cystoscopy. Other conditions which can produce similar symptoms include overactive bladder, urinary tract infection (UTI), sexually transmitted infections, prostatitis, endometriosis in females, and bladder cancer. There is no cure for interstitial cystitis and management of this condition can be challenging. Treatments that may improve symptoms include lifestyle changes, medications, or procedures. Lifestyle changes may include stopping smoking and reducing stress. Medications may include ibuprofen, pentosan polysulfate, or amitriptyline. Procedures may include bladder distention, nerve stimulation, or surgery. Pelvic floor exercises and long term antibiotics are not recommended. In the United States and Europe, it is estimated that around 0.5% of people are affected. Women are affected about five times as often as men. Onset is typically in middle age. The term "interstitial cystitis" first came into use in 1887. The most common symptoms of IC/BPS are suprapubic pain, urinary frequency, painful sexual intercourse, and waking up from sleep to urinate.
In general, symptoms may include painful urination described as a burning sensation in the urethra during urination, pelvic pain that is worsened with the consumption of certain foods or drinks, urinary urgency, and pressure in the bladder or pelvis. Other frequently described symptoms are urinary hesitancy (needing to wait for the urinary stream to begin, often caused by pelvic floor dysfunction and tension), and discomfort and difficulty driving, working, exercising, or traveling. Pelvic pain experienced by those with IC typically worsens with filling of the urinary bladder and may improve with urination. During cystoscopy, 5–10% of people with IC are found to have Hunner's ulcers. A person with IC may have discomfort only in the urethra, while another might struggle with pain in the entire pelvis. Interstitial cystitis symptoms usually fall into one of two patterns: significant suprapubic pain with little frequency or a lesser amount of suprapubic pain but with increased urinary frequency. Some people with IC/BPS have been diagnosed with other conditions such as irritable bowel syndrome (IBS), fibromyalgia, chronic fatigue syndrome, allergies, Sjögren syndrome, which raises the possibility that interstitial cystitis may be caused by mechanisms that cause these other conditions. There is also some evidence of an association between urologic pain syndromes, such as IC/BPS and CP/CPPS, with non-celiac gluten sensitivity in some people. In addition, men with IC/PBS are frequently diagnosed as having chronic nonbacterial prostatitis, and there is an extensive overlap of symptoms and treatment between the two conditions, leading researchers to posit that the conditions may share the same cause and pathology. The cause of IC/BPS is not known. However, several explanations have been proposed and include the following: autoimmune theory, nerve theory, mast cell theory, leaky lining theory, infection theory, and a theory of production of a toxic substance in the urine. 
Other suggested etiological causes are neurologic, allergic, genetic, and stress-psychological. In addition, recent research shows that those with IC may have a substance in the urine that inhibits the growth of cells in the bladder epithelium. An infection may then predispose those people to develop IC. Evidence from clinical and laboratory studies confirms that mast cells play a central role in IC/BPS possibly due to their ability to release histamine and cause pain, swelling, scarring, and interfere with healing. Research has shown a proliferation of nerve fibers is present in the bladders of people with IC which is absent in the bladders of people who have not been diagnosed with IC. Regardless of the origin, most people with IC/BPS struggle with a damaged urothelium, or bladder lining. When the surface glycosaminoglycan (GAG) layer is damaged (via a urinary tract infection (UTI), excessive consumption of coffee or sodas, traumatic injury, etc.), urinary chemicals can "leak" into surrounding tissues, causing pain, inflammation, and urinary symptoms. Oral medications like pentosan polysulfate and medications placed directly into the bladder via a catheter sometimes work to repair and rebuild this damaged/wounded lining, allowing for a reduction in symptoms. Most literature supports the belief that IC's symptoms are associated with a defect in the bladder epithelium lining, allowing irritating substances in the urine to penetrate into the bladder—a breakdown of the bladder lining (also known as the adherence theory). Deficiency in this glycosaminoglycan layer on the surface of the bladder results in increased permeability of the underlying submucosal tissues. GP51 has been identified as a possible urinary biomarker for IC with significant variations in GP51 levels in those with IC when compared to individuals without interstitial cystitis. Numerous studies have noted the link between IC, anxiety, stress, hyper-responsiveness, and panic. 
Another proposed cause for interstitial cystitis is that the body's immune system attacks the bladder. Biopsies on the bladder walls of people with IC usually contain mast cells. Mast cells containing histamine packets gather when an allergic reaction is occurring. The body identifies the bladder wall as a foreign agent, and the histamine packets burst open and attack. The body attacks itself, which is the basis of autoimmune disorders. Additionally, IC may be triggered by an unknown toxin or stimulus which causes nerves in the bladder wall to fire uncontrollably. When they fire, they release substances called neuropeptides that induce a cascade of reactions that cause pain in the bladder wall. Some genetic subtypes, in some people, have been linked to the disorder. A diagnosis of IC/BPS is one of exclusion, as well as a review of clinical symptoms. The American Urological Association Guidelines recommend starting with a careful history of the person, physical examination and laboratory tests to assess and document symptoms of interstitial cystitis, as well as other potential disorders. The KCl test, also known as the potassium sensitivity test, is no longer recommended. The test uses a mild potassium solution to evaluate the integrity of the bladder wall. Though the latter is not specific for IC/BPS, it has been determined to be helpful in predicting the use of compounds, such as pentosan polysulphate, which are designed to help repair the GAG layer. For complicated cases, the use of hydrodistention with cystoscopy may be helpful. Researchers, however, determined that this visual examination of the bladder wall after stretching the bladder was not specific for IC/BPS and that the test, itself, can contribute to the development of small glomerulations (petechial hemorrhages) often found in IC/BPS. Thus, a diagnosis of IC/BPS is one of exclusion, as well as a review of clinical symptoms.
In 2006, the ESSIC society proposed more rigorous and demanding diagnostic methods with specific classification criteria, so that IC/BPS cannot be confused with other, similar conditions. Specifically, they require that a person have pain associated with the bladder, accompanied by one other urinary symptom. Thus, a person with just frequency or urgency would be excluded from a diagnosis. Secondly, they strongly encourage the exclusion of confusable diseases through an extensive and expensive series of tests, including (A) a medical history and physical exam, (B) a dipstick urinalysis, various urine cultures, and a serum PSA in men over 40, (C) flowmetry and post-void residual urine volume by ultrasound scanning, and (D) cystoscopy. A diagnosis of IC/BPS would be confirmed by hydrodistention during cystoscopy with biopsy.

They also propose a ranking system based upon the physical findings in the bladder. People receive a numeric and letter-based score according to the severity of their disease as found during the hydrodistention: a score of 1–3 relates to the severity of the disease, and a rating of A–C represents biopsy findings. Thus, a person rated 1A would have very mild symptoms and disease, while a person rated 3C would have the worst possible symptoms. Widely recognized scoring systems, such as the O'Leary–Sant symptom and problem score, have emerged to evaluate the severity of IC symptoms such as pain and urinary symptoms.

The symptoms of IC/BPS are often misdiagnosed as a urinary tract infection. However, IC/BPS has not been shown to be caused by a bacterial infection, and antibiotics are an ineffective treatment. IC/BPS is commonly misdiagnosed as chronic prostatitis/chronic pelvic pain syndrome (CP/CPPS) in men, and as endometriosis and uterine fibroids in women.

In 2011, the American Urological Association released a consensus-based guideline for the diagnosis and treatment of interstitial cystitis.
The guidelines include treatments ranging from conservative to more invasive. They also list several discontinued treatments, including long-term oral antibiotics, intravesical bacillus Calmette-Guérin, intravesical resiniferatoxin, high-pressure and long-duration hydrodistention, and systemic glucocorticoids.

Bladder distension while under general anesthesia, also known as hydrodistention (a procedure which stretches the bladder capacity), has shown some success in reducing urinary frequency and giving short-term pain relief to those with IC. However, it is unknown exactly how this procedure provides pain relief. Recent studies show that pressure on pelvic trigger points can relieve symptoms. The relief achieved by bladder distensions is only temporary (weeks or months), so hydrodistention is not viable as a long-term treatment for IC/BPS. The proportion of people with IC/BPS who experience relief from hydrodistention is currently unknown, and evidence for this modality is limited by a lack of properly controlled studies. Bladder rupture and sepsis may be associated with prolonged, high-pressure hydrodistention.

Bladder instillation of medication is one of the main forms of treatment of interstitial cystitis, but evidence for its effectiveness is currently limited. Advantages of this treatment approach include direct contact of the medication with the bladder and low systemic side effects due to poor absorption of the medication. Single medications or a mixture of medications are commonly used in bladder instillation preparations. Dimethyl sulfoxide (DMSO) is the only approved bladder instillation for IC/BPS, yet it is much less frequently used in urology clinics.

A 50% solution of DMSO had the potential to create irreversible muscle contraction. However, at a lesser concentration of 25%, the effect was found to be reversible.
Long-term use of DMSO is questionable, as its mechanism of action is not fully understood, though DMSO is thought to inhibit mast cells and may have anti-inflammatory, muscle-relaxing, and analgesic effects. Other agents used for bladder instillations to treat interstitial cystitis include heparin, lidocaine, chondroitin sulfate, hyaluronic acid, pentosan polysulfate, oxybutynin, and botulinum toxin A. Preliminary evidence suggests these agents are efficacious in reducing symptoms of interstitial cystitis, but further study with larger, randomized, controlled clinical trials is needed.

Diet modification is often recommended as a first-line method of self-treatment for interstitial cystitis, though rigorous controlled studies examining the impact of diet on interstitial cystitis signs and symptoms are currently lacking. An increase in fiber intake may alleviate symptoms. Individuals with interstitial cystitis often experience an increase in symptoms when they consume certain foods and beverages. Avoidance of potential trigger foods and beverages, such as caffeine-containing beverages (coffee, tea, and soda), alcoholic beverages, chocolate, citrus fruits, hot peppers, and artificial sweeteners, may help alleviate symptoms. Diet triggers vary between individuals with IC; the best way for a person to discover his or her own triggers is to use an elimination diet. Sensitivity to trigger foods may be reduced if calcium glycerophosphate and/or sodium bicarbonate is consumed. The foundation of therapy is a modification of diet to help people avoid foods that can further irritate the damaged bladder wall.

The mechanism by which dietary modification benefits people with IC is unclear. Integration of neural signals from pelvic organs may mediate the effects of diet on symptoms of IC.

The antihistamine hydroxyzine failed to demonstrate superiority over placebo in the treatment of people with IC in a randomized, controlled clinical trial.
Amitriptyline has been shown to be effective in reducing symptoms such as chronic pelvic pain and nocturia in many people with IC/BPS, with a median dose of 75 mg daily. In one study, the antidepressant duloxetine was found to be ineffective as a treatment, although a patent exists for the use of duloxetine in the context of IC and the drug is known to relieve neuropathic pain. The calcineurin inhibitor cyclosporine A has been studied as a treatment for interstitial cystitis due to its immunosuppressive properties. A prospective randomized study found cyclosporine A to be more effective at treating IC symptoms than pentosan polysulfate, but it also had more adverse effects.

Oral pentosan polysulfate is believed to repair the protective glycosaminoglycan coating of the bladder, but studies have encountered mixed results when attempting to determine whether the effect is statistically significant compared to placebo.

Urologic pelvic pain syndromes, such as IC/BPS and CP/CPPS, are characterized by pelvic muscle tenderness, and symptoms may be reduced with pelvic myofascial physical therapy. Chronic muscle tension may leave the pelvic area in a sensitized condition, resulting in a loop of muscle tension and heightened neurological feedback (neural wind-up), a form of myofascial pain syndrome. Current protocols, such as the Wise–Anderson Protocol, largely focus on stretches to release overtensed muscles in the pelvic or anal area (commonly referred to as trigger points), physical therapy to the area, and progressive relaxation therapy to reduce causative stress.

Pelvic floor dysfunction is a fairly new area of specialty for physical therapists worldwide. The goal of therapy is to relax and lengthen the pelvic floor muscles, rather than to tighten and/or strengthen them, as is the goal of therapy for people with urinary incontinence. Thus, traditional exercises such as Kegel exercises, which are used to strengthen pelvic muscles, can provoke pain and additional muscle tension.
A specially trained physical therapist can provide direct, hands-on evaluation of the muscles, both externally and internally. A therapeutic wand can also be used to perform pelvic floor muscle myofascial release to provide relief.

Surgery is rarely used for IC/BPS. Surgical intervention is very unpredictable and is considered a treatment of last resort for severe refractory cases of interstitial cystitis. Some people who opt for surgical intervention continue to experience pain after surgery. Typical surgical interventions for refractory cases of IC/BPS include bladder augmentation, urinary diversion, transurethral fulguration and resection of ulcers, and bladder removal (cystectomy).

Neuromodulation can be successful in treating IC/BPS symptoms, including pain. One electronic pain-killing option is transcutaneous electrical nerve stimulation (TENS). Percutaneous tibial nerve stimulators have also been used, with varying degrees of success. Percutaneous sacral nerve root stimulation was able to produce statistically significant improvements in several parameters, including pain.

There is little evidence looking at the effects of alternative medicine, though its use is common. There is tentative evidence that acupuncture may help pain associated with IC/BPS as part of other treatments. Despite a scarcity of controlled studies on alternative medicine and IC/BPS, "rather good results have been obtained" when acupuncture is combined with other treatments. Biofeedback, a relaxation technique aimed at helping people control functions of the autonomic nervous system, has shown some benefit in controlling pain associated with IC/BPS as part of a multimodal approach that may also include medication or hydrodistention of the bladder.

IC/BPS has a profound impact on quality of life.
A 2007 Finnish epidemiologic study showed that two-thirds of women at moderate to high risk of having interstitial cystitis reported impairment in their quality of life, and 35% of people with IC reported an impact on their sexual life. A 2012 survey showed that, among a group of adult women with symptoms of interstitial cystitis, 11% reported suicidal thoughts in the past two weeks. Other research has shown that the impact of IC/BPS on quality of life is severe and may be comparable to the quality of life experienced in end-stage kidney disease or rheumatoid arthritis.

International recognition of interstitial cystitis has grown, and international urology conferences to address the heterogeneity in diagnostic criteria have recently been held. IC/BPS is now recognized with an official disability code in the United States of America.

IC/BPS affects men and women of all cultures, socioeconomic backgrounds, and ages. Although the disease was previously believed to be a condition of menopausal women, growing numbers of men and women are being diagnosed in their twenties and younger. IC/BPS is not a rare condition. Early research suggested that the number of IC/BPS cases ranged from 1 in 100,000 to 5.1 in 1,000 of the general population. In recent years, the scientific community has achieved a much deeper understanding of the epidemiology of interstitial cystitis. Recent studies have revealed that between 2.7 and 6.53 million women in the USA have symptoms of IC, and up to 12% of women may have early symptoms of IC/BPS. Further study has estimated that the condition is far more prevalent in men than previously thought, with 1.8 to 4.2 million men having symptoms of interstitial cystitis.

The condition is officially recognized as a disability in the United States.

Philadelphia surgeon Joseph Parrish published the earliest record of interstitial cystitis in 1836, describing three cases of severe lower urinary tract symptoms without the presence of a bladder stone.
The term "interstitial cystitis" was coined by Dr. Alexander Skene in 1887 to describe the disease. In 2002, the United States amended the Social Security Act to include interstitial cystitis as a disability. The first guideline for the diagnosis and treatment of interstitial cystitis was released by a Japanese research team in 2009. The American Urological Association released the first American clinical practice guideline for diagnosing and treating IC/BPS in 2011 and has since (in 2014 and 2022) updated the guideline to maintain the standard of care as knowledge of IC/BPS evolves.

Originally called interstitial cystitis, this disorder was renamed interstitial cystitis/bladder pain syndrome (IC/BPS) in the 2002–2010 timeframe. In 2007, the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) began using the umbrella term urologic chronic pelvic pain syndrome (UCPPS) to refer to pelvic pain syndromes associated with the bladder (e.g., interstitial cystitis/bladder pain syndrome) and with the prostate gland or pelvis (e.g., chronic prostatitis/chronic pelvic pain syndrome). As of 2008, terms in use in addition to IC/BPS include painful bladder syndrome, bladder pain syndrome, and hypersensitive bladder syndrome, alone and in a variety of combinations. These different terms are used in different parts of the world. The term "interstitial cystitis" is the primary term used in ICD-10 and MeSH. Grover et al. said, "The International Continence Society named the disease interstitial cystitis/painful bladder syndrome (IC/PBS) in 2002 [Abrams et al. 2002], while the Multinational Interstitial Cystitis Association have labeled it as painful bladder syndrome/interstitial cystitis (PBS/IC) [Hanno et al. 2005]. Recently, the European Society for the study of Interstitial Cystitis (ESSIC) proposed the moniker, 'bladder pain syndrome' (BPS) [van de Merwe et al. 2008]."
[ { "paragraph_id": 0, "text": "Interstitial cystitis (IC), a type of bladder pain syndrome (BPS), is chronic pain in the bladder and pelvic floor of unknown cause. It is the urologic chronic pelvic pain syndrome of women. Symptoms include feeling the need to urinate right away, needing to urinate often, and pain with sex. IC/BPS is associated with depression and lower quality of life. Many of those affected also have irritable bowel syndrome and fibromyalgia.", "title": "" }, { "paragraph_id": 1, "text": "The cause of interstitial cystitis is unknown. While it can, it does not typically run in a family. The diagnosis is usually based on the symptoms after ruling out other conditions. Typically the urine culture is negative. Ulceration or inflammation may be seen on cystoscopy. Other conditions which can produce similar symptoms include overactive bladder, urinary tract infection (UTI), sexually transmitted infections, prostatitis, endometriosis in females, and bladder cancer.", "title": "" }, { "paragraph_id": 2, "text": "There is no cure for interstitial cystitis and management of this condition can be challenging. Treatments that may improve symptoms include lifestyle changes, medications, or procedures. Lifestyle changes may include stopping smoking and reducing stress. Medications may include ibuprofen, pentosan polysulfate, or amitriptyline. Procedures may include bladder distention, nerve stimulation, or surgery. Pelvic floor exercises and long term antibiotics are not recommended.", "title": "" }, { "paragraph_id": 3, "text": "In the United States and Europe, it is estimated that around 0.5% of people are affected. Women are affected about five times as often as men. Onset is typically in middle age. 
The term \"interstitial cystitis\" first came into use in 1887.", "title": "" }, { "paragraph_id": 4, "text": "The most common symptoms of IC/BPS are suprapubic pain, urinary frequency, painful sexual intercourse, and waking up from sleep to urinate.", "title": "Signs and symptoms" }, { "paragraph_id": 5, "text": "In general, symptoms may include painful urination described as a burning sensation in the urethra during urination, pelvic pain that is worsened with the consumption of certain foods or drinks, urinary urgency, and pressure in the bladder or pelvis. Other frequently described symptoms are urinary hesitancy (needing to wait for the urinary stream to begin, often caused by pelvic floor dysfunction and tension), and discomfort and difficulty driving, working, exercising, or traveling. Pelvic pain experienced by those with IC typically worsens with filling of the urinary bladder and may improve with urination.", "title": "Signs and symptoms" }, { "paragraph_id": 6, "text": "During cystoscopy, 5–10% of people with IC are found to have Hunner's ulcers. A person with IC may have discomfort only in the urethra, while another might struggle with pain in the entire pelvis. Interstitial cystitis symptoms usually fall into one of two patterns: significant suprapubic pain with little frequency or a lesser amount of suprapubic pain but with increased urinary frequency.", "title": "Signs and symptoms" }, { "paragraph_id": 7, "text": "Some people with IC/BPS have been diagnosed with other conditions such as irritable bowel syndrome (IBS), fibromyalgia, chronic fatigue syndrome, allergies, Sjögren syndrome, which raises the possibility that interstitial cystitis may be caused by mechanisms that cause these other conditions. 
There is also some evidence of an association between urologic pain syndromes, such as IC/BPS and CP/CPPS, with non-celiac gluten sensitivity in some people.", "title": "Signs and symptoms" }, { "paragraph_id": 8, "text": "In addition, men with IC/PBS are frequently diagnosed as having chronic nonbacterial prostatitis, and there is an extensive overlap of symptoms and treatment between the two conditions, leading researchers to posit that the conditions may share the same cause and pathology.", "title": "Signs and symptoms" }, { "paragraph_id": 9, "text": "The cause of IC/BPS is not known. However, several explanations have been proposed and include the following: autoimmune theory, nerve theory, mast cell theory, leaky lining theory, infection theory, and a theory of production of a toxic substance in the urine. Other suggested etiological causes are neurologic, allergic, genetic, and stress-psychological. In addition, recent research shows that those with IC may have a substance in the urine that inhibits the growth of cells in the bladder epithelium. An infection may then predispose those people to develop IC. Evidence from clinical and laboratory studies confirms that mast cells play a central role in IC/BPS possibly due to their ability to release histamine and cause pain, swelling, scarring, and interfere with healing. Research has shown a proliferation of nerve fibers is present in the bladders of people with IC which is absent in the bladders of people who have not been diagnosed with IC.", "title": "Causes" }, { "paragraph_id": 10, "text": "Regardless of the origin, most people with IC/BPS struggle with a damaged urothelium, or bladder lining. When the surface glycosaminoglycan (GAG) layer is damaged (via a urinary tract infection (UTI), excessive consumption of coffee or sodas, traumatic injury, etc.), urinary chemicals can \"leak\" into surrounding tissues, causing pain, inflammation, and urinary symptoms. 
Oral medications like pentosan polysulfate and medications placed directly into the bladder via a catheter sometimes work to repair and rebuild this damaged/wounded lining, allowing for a reduction in symptoms. Most literature supports the belief that IC's symptoms are associated with a defect in the bladder epithelium lining, allowing irritating substances in the urine to penetrate into the bladder—a breakdown of the bladder lining (also known as the adherence theory). Deficiency in this glycosaminoglycan layer on the surface of the bladder results in increased permeability of the underlying submucosal tissues.", "title": "Causes" }, { "paragraph_id": 11, "text": "GP51 has been identified as a possible urinary biomarker for IC with significant variations in GP51 levels in those with IC when compared to individuals without interstitial cystitis.", "title": "Causes" }, { "paragraph_id": 12, "text": "Numerous studies have noted the link between IC, anxiety, stress, hyper-responsiveness, and panic. Another proposed cause for interstitial cystitis is that the body's immune system attacks the bladder. Biopsies on the bladder walls of people with IC usually contain mast cells. Mast cells containing histamine packets gather when an allergic reaction is occurring. The body identifies the bladder wall as a foreign agent, and the histamine packets burst open and attack. The body attacks itself, which is the basis of autoimmune disorders. Additionally, IC may be triggered by an unknown toxin or stimulus which causes nerves in the bladder wall to fire uncontrollably. When they fire, they release substances called neuropeptides that induce a cascade of reactions that cause pain in the bladder wall.", "title": "Causes" }, { "paragraph_id": 13, "text": "Some genetic subtypes, in some people, have been linked to the disorder.", "title": "Causes" }, { "paragraph_id": 14, "text": "A diagnosis of IC/BPS is one of exclusion, as well as a review of clinical symptoms. 
The American Urological Association Guidelines recommend starting with a careful history of the person, physical examination and laboratory tests to assess and document symptoms of interstitial cytitis, as well as other potential disorders.", "title": "Diagnosis" }, { "paragraph_id": 15, "text": "The KCl test, also known as the potassium sensitivity test, is no longer recommended. The test uses a mild potassium solution to evaluate the integrity of the bladder wall. Though the latter is not specific for IC/BPS, it has been determined to be helpful in predicting the use of compounds, such as pentosan polysulphate, which are designed to help repair the GAG layer.", "title": "Diagnosis" }, { "paragraph_id": 16, "text": "For complicated cases, the use of hydrodistention with cystoscopy may be helpful. Researchers, however, determined that this visual examination of the bladder wall after stretching the bladder was not specific for IC/BPS and that the test, itself, can contribute to the development of small glomerulations (petechial hemorrhages) often found in IC/BPS. Thus, a diagnosis of IC/BPS is one of exclusion, as well as a review of clinical symptoms.", "title": "Diagnosis" }, { "paragraph_id": 17, "text": "In 2006, the ESSIC society proposed more rigorous and demanding diagnostic methods with specific classification criteria so that it cannot be confused with other, similar conditions. Specifically, they require that a person must have pain associated with the bladder, accompanied by one other urinary symptom. Thus, a person with just frequency or urgency would be excluded from a diagnosis. Secondly, they strongly encourage the exclusion of confusable diseases through an extensive and expensive series of tests including (A) a medical history and physical exam, (B) a dipstick urinalysis, various urine cultures, and a serum PSA in men over 40, (C) flowmetry and post-void residual urine volume by ultrasound scanning and (D) cystoscopy. 
A diagnosis of IC/BPS would be confirmed with a hydrodistention during cystoscopy with biopsy.", "title": "Diagnosis" }, { "paragraph_id": 18, "text": "They also propose a ranking system based upon the physical findings in the bladder. People would receive a numeric and letter based score based upon the severity of their disease as found during the hydrodistention. A score of 1–3 would relate to the severity of the disease and a rating of A–C represents biopsy findings. Thus, a person with 1A would have very mild symptoms and disease while a person with 3C would have the worst possible symptoms. Widely recognized scoring systems such as the O'Leary Sant symptom and problem score have emerged to evaluate the severity of IC symptoms such as pain and urinary symptoms.", "title": "Diagnosis" }, { "paragraph_id": 19, "text": "The symptoms of IC/BPS are often misdiagnosed as a urinary tract infection. However, IC/BPS has not been shown to be caused by a bacterial infection and antibiotics are an ineffective treatment. 
IC/BPS is commonly misdiagnosed as chronic prostatitis/chronic pelvic pain syndrome (CP/CPPS) in men, and endometriosis and uterine fibroids (in women).", "title": "Diagnosis" }, { "paragraph_id": 20, "text": "In 2011, the American Urological Association released consensus-based guideline for the diagnosis and treatment of interstitial cystitis.", "title": "Treatment" }, { "paragraph_id": 21, "text": "They include treatments ranging from conservative to more invasive:", "title": "Treatment" }, { "paragraph_id": 22, "text": "The American Urological Association guidelines also listed several discontinued treatments, including long-term oral antibiotics, intravesical bacillus Calmette Guerin, intravesical resiniferatoxin), high-pressure and long-duration hydrodistention, and systemic glucocorticoids.", "title": "Treatment" }, { "paragraph_id": 23, "text": "Bladder distension while under general anesthesia, also known as hydrodistention (a procedure which stretches the bladder capacity), has shown some success in reducing urinary frequency and giving short-term pain relief to those with IC. However, it is unknown exactly how this procedure causes pain relief. Recent studies show pressure on pelvic trigger points can relieve symptoms. The relief achieved by bladder distensions is only temporary (weeks or months), so is not viable as a long-term treatment for IC/BPS. The proportion of people with IC/BPS who experience relief from hydrodistention is currently unknown and evidence for this modality is limited by a lack of properly controlled studies. Bladder rupture and sepsis may be associated with prolonged, high-pressure hydrodistention.", "title": "Treatment" }, { "paragraph_id": 24, "text": "Bladder instillation of medication is one of the main forms of treatment of interstitial cystitis, but evidence for its effectiveness is currently limited. 
Advantages of this treatment approach include direct contact of the medication with the bladder and low systemic side effects due to poor absorption of the medication. Single medications or a mixture of medications are commonly used in bladder instillation preparations. Dimethyl sulfoxide (DMSO) is the only approved bladder instillation for IC/BPS yet it is much less frequently used in urology clinics.", "title": "Treatment" }, { "paragraph_id": 25, "text": "A 50% solution of DMSO had the potential to create irreversible muscle contraction. However, a lesser solution of 25% was found to be reversible. Long-term use of DMSO is questionable, as its mechanism of action is not fully understood though DMSO is thought to inhibit mast cells and may have anti-inflammatory, muscle-relaxing, and analgesic effects. Other agents used for bladder instillations to treat interstitial cystitis include: heparin, lidocaine, chondroitin sulfate, hyaluronic acid, pentosan polysulfate, oxybutynin, and botulinum toxin A. Preliminary evidence suggests these agents are efficacious in reducing symptoms of interstitial cystitis, but further study with larger, randomized, controlled clinical trials is needed.", "title": "Treatment" }, { "paragraph_id": 26, "text": "Diet modification is often recommended as a first-line method of self-treatment for interstitial cystitis, though rigorous controlled studies examining the impact diet has on interstitial cystitis signs and symptoms are currently lacking. An increase in fiber intake may alleviate symptoms. Individuals with interstitial cystitis often experience an increase in symptoms when they consume certain foods and beverages. Avoidance of these potential trigger foods and beverages such as caffeine-containing beverages including coffee, tea, and soda, alcoholic beverages, chocolate, citrus fruits, hot peppers, and artificial sweeteners may be helpful in alleviating symptoms. 
Diet triggers vary between individuals with IC; the best way for a person to discover his or her own triggers is to use an elimination diet. Sensitivity to trigger foods may be reduced if calcium glycerophosphate and/or sodium bicarbonate is consumed. The foundation of therapy is a modification of diet to help people avoid those foods which can further irritate the damaged bladder wall.", "title": "Treatment" }, { "paragraph_id": 27, "text": "The mechanism by which dietary modification benefits people with IC is unclear. Integration of neural signals from pelvic organs may mediate the effects of diet on symptoms of IC.", "title": "Treatment" }, { "paragraph_id": 28, "text": "The antihistamine hydroxyzine failed to demonstrate superiority over placebo in treatment of people with IC in a randomized, controlled, clinical trial. Amitriptyline has been shown to be effective in reducing symptoms such as chronic pelvic pain and nocturia in many people with IC/BPS with a median dose of 75 mg daily. In one study, the antidepressant duloxetine was found to be ineffective as a treatment, although a patent exists for use of duloxetine in the context of IC, and is known to relieve neuropathic pain. The calcineurin inhibitor cyclosporine A has been studied as a treatment for interstitial cystitis due to its immunosuppressive properties. 
A prospective randomized study found cyclosporine A to be more effective at treating IC symptoms than pentosan polysulfate, but also had more adverse effects.", "title": "Treatment" }, { "paragraph_id": 29, "text": "Oral pentosan polysulfate is believed to repair the protective glycosaminoglycan coating of the bladder, but studies have encountered mixed results when attempting to determine if the effect is statistically significant compared to placebo.", "title": "Treatment" }, { "paragraph_id": 30, "text": "Urologic pelvic pain syndromes, such as IC/BPS and CP/CPPS, are characterized by pelvic muscle tenderness, and symptoms may be reduced with pelvic myofascial physical therapy.", "title": "Treatment" }, { "paragraph_id": 31, "text": "This may leave the pelvic area in a sensitized condition, resulting in a loop of muscle tension and heightened neurological feedback (neural wind-up), a form of myofascial pain syndrome. Current protocols, such as the Wise–Anderson Protocol, largely focus on stretches to release overtensed muscles in the pelvic or anal area (commonly referred to as trigger points), physical therapy to the area, and progressive relaxation therapy to reduce causative stress.", "title": "Treatment" }, { "paragraph_id": 32, "text": "Pelvic floor dysfunction is a fairly new area of specialty for physical therapists worldwide. The goal of therapy is to relax and lengthen the pelvic floor muscles, rather than to tighten and/or strengthen them as is the goal of therapy for people with urinary incontinence. Thus, traditional exercises such as Kegel exercises, which are used to strengthen pelvic muscles, can provoke pain and additional muscle tension. 
A specially trained physical therapist can provide direct, hands on evaluation of the muscles, both externally and internally.", "title": "Treatment" }, { "paragraph_id": 33, "text": "A therapeutic wand can also be used to perform pelvic floor muscle myofascial release to provide relief.", "title": "Treatment" }, { "paragraph_id": 34, "text": "Surgery is rarely used for IC/BPS. Surgical intervention is very unpredictable, and is considered a treatment of last resort for severe refractory cases of interstitial cystitis. Some people who opt for surgical intervention continue to experience pain after surgery. Typical surgical interventions for refractory cases of IC/BPS include: bladder augmentation, urinary diversion, transurethral fulguration and resection of ulcers, and bladder removal (cystectomy).", "title": "Treatment" }, { "paragraph_id": 35, "text": "Neuromodulation can be successful in treating IC/BPS symptoms, including pain. One electronic pain-killing option is TENS. Percutaneous tibial nerve stimulation stimulators have also been used, with varying degrees of success. Percutaneous sacral nerve root stimulation was able to produce statistically significant improvements in several parameters, including pain.", "title": "Treatment" }, { "paragraph_id": 36, "text": "There is little evidence looking at the effects of alternative medicine though their use is common. There is tentative evidence that acupuncture may help pain associated with IC/BPS as part of other treatments. 
Despite a scarcity of controlled studies on alternative medicine and IC/BPS, \"rather good results have been obtained\" when acupuncture is combined with other treatments.", "title": "Treatment" }, { "paragraph_id": 37, "text": "Biofeedback, a relaxation technique aimed at helping people control functions of the autonomic nervous system, has shown some benefit in controlling pain associated with IC/BPS as part of a multimodal approach that may also include medication or hydrodistention of the bladder.", "title": "Treatment" }, { "paragraph_id": 38, "text": "IC/BPS has a profound impact on quality of life. A 2007 Finnish epidemiologic study showed that two-thirds of women at moderate to high risk of having interstitial cystitis reported impairment in their quality of life and 35% of people with IC reported an impact on their sexual life. A 2012 survey showed that among a group of adult women with symptoms of interstitial cystitis, 11% reported suicidal thoughts in the past two weeks. Other research has shown that the impact of IC/BPS on quality of life is severe and may be comparable to the quality of life experienced in end-stage kidney disease or rheumatoid arthritis.", "title": "Prognosis" }, { "paragraph_id": 39, "text": "International recognition of interstitial cystitis has grown and international urology conferences to address the heterogeneity in diagnostic criteria have recently been held. IC/PBS is now recognized with an official disability code in the United States of America.", "title": "Prognosis" }, { "paragraph_id": 40, "text": "IC/BPS affects men and women of all cultures, socioeconomic backgrounds, and ages. Although the disease was previously believed to be a condition of menopausal women, growing numbers of men and women are being diagnosed in their twenties and younger. IC/BPS is not a rare condition. Early research suggested that the number of IC/BPS cases ranged from 1 in 100,000 to 5.1 in 1,000 of the general population. 
In recent years, the scientific community has achieved a much deeper understanding of the epidemiology of interstitial cystitis. Recent studies have revealed that between 2.7 and 6.53 million women in the USA have symptoms of IC and up to 12% of women may have early symptoms of IC/BPS. Further study has estimated that the condition is far more prevalent in men than previously thought, with between 1.8 and 4.2 million men having symptoms of interstitial cystitis.", "title": "Epidemiology" }, { "paragraph_id": 41, "text": "The condition is officially recognized as a disability in the United States.", "title": "Epidemiology" }, { "paragraph_id": 42, "text": "Philadelphia surgeon Joseph Parrish published the earliest record of interstitial cystitis in 1836, describing three cases of severe lower urinary tract symptoms without the presence of a bladder stone. The term \"interstitial cystitis\" was coined by Dr. Alexander Skene in 1887 to describe the disease. In 2002, the United States amended the Social Security Act to include interstitial cystitis as a disability. The first guideline for diagnosis and treatment of interstitial cystitis was released by a Japanese research team in 2009. The American Urological Association released the first American clinical practice guideline for diagnosing and treating IC/BPS in 2011 and has since (in 2014 and 2022) updated the guideline to maintain standard of care as knowledge of IC/BPS evolves.", "title": "History" }, { "paragraph_id": 43, "text": "Originally called interstitial cystitis, this disorder was renamed to interstitial cystitis/bladder pain syndrome (IC/BPS) in the 2002–2010 timeframe. 
In 2007, the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) began using the umbrella term urologic chronic pelvic pain syndrome (UCPPS) to refer to pelvic pain syndromes associated with the bladder (e.g., interstitial cystitis/bladder pain syndrome) and with the prostate gland or pelvis (e.g., chronic prostatitis/chronic pelvic pain syndrome).", "title": "History" }, { "paragraph_id": 44, "text": "In 2008, terms currently in use in addition to IC/BPS include painful bladder syndrome, bladder pain syndrome and hypersensitive bladder syndrome, alone and in a variety of combinations. These different terms are being used in different parts of the world. The term \"interstitial cystitis\" is the primary term used in ICD-10 and MeSH. Grover et al. said, \"The International Continence Society named the disease interstitial cystitis/painful bladder syndrome (IC/PBS) in 2002 [Abrams et al. 2002], while the Multinational Interstitial Cystitis Association have labeled it as painful bladder syndrome/interstitial cystitis (PBS/IC) [Hanno et al. 2005]. Recently, the European Society for the study of Interstitial Cystitis (ESSIC) proposed the moniker, 'bladder pain syndrome' (BPS) [van de Merwe et al. 2008].\"", "title": "History" } ]
Interstitial cystitis (IC), a type of bladder pain syndrome (BPS), is chronic pain in the bladder and pelvic floor of unknown cause. It is the urologic chronic pelvic pain syndrome of women. Symptoms include feeling the need to urinate right away, needing to urinate often, and pain with sex. IC/BPS is associated with depression and lower quality of life. Many of those affected also have irritable bowel syndrome and fibromyalgia. The cause of interstitial cystitis is unknown. While it can run in families, it typically does not. The diagnosis is usually based on the symptoms after ruling out other conditions. Typically the urine culture is negative. Ulceration or inflammation may be seen on cystoscopy. Other conditions which can produce similar symptoms include overactive bladder, urinary tract infection (UTI), sexually transmitted infections, prostatitis, endometriosis in females, and bladder cancer. There is no cure for interstitial cystitis and management of this condition can be challenging. Treatments that may improve symptoms include lifestyle changes, medications, or procedures. Lifestyle changes may include stopping smoking and reducing stress. Medications may include ibuprofen, pentosan polysulfate, or amitriptyline. Procedures may include bladder distention, nerve stimulation, or surgery. Pelvic floor exercises and long-term antibiotics are not recommended. In the United States and Europe, it is estimated that around 0.5% of people are affected. Women are affected about five times as often as men. Onset is typically in middle age. The term "interstitial cystitis" first came into use in 1887.
2001-12-12T09:02:44Z
2023-12-27T09:25:22Z
[ "Template:Cite journal", "Template:Cite book", "Template:Citation", "Template:Curlie", "Template:Authority control", "Template:Infobox medical condition (new)", "Template:Reflist", "Template:Cite web", "Template:Urinary tract disease", "Template:Medical resources", "Template:Use dmy dates", "Template:Citation needed", "Template:Webarchive" ]
https://en.wikipedia.org/wiki/Interstitial_cystitis
15,355
ICI
ICI or Ici may refer to:
[ { "paragraph_id": 0, "text": "ICI or Ici may refer to:", "title": "" } ]
ICI or Ici may refer to:
2022-08-03T12:04:12Z
[ "Template:Disambiguation", "Template:Wiktionary", "Template:TOC right", "Template:Ill" ]
https://en.wikipedia.org/wiki/ICI
15,356
Imperial Chemical Industries
Imperial Chemical Industries (ICI) was a British chemical company. It was, for much of its history, the largest manufacturer in Britain. It was formed by the merger of four leading British chemical companies in 1926. Its headquarters were at Millbank in London. ICI was a constituent of the FT 30 and later the FTSE 100 indices. ICI made general chemicals, plastics, paints, pharmaceuticals and speciality products, including food ingredients, speciality polymers, electronic materials, fragrances and flavourings. In 2008, it was acquired by AkzoNobel, which immediately sold parts of ICI to Henkel and integrated ICI's remaining operations within its existing organisation. The company was founded in December 1926 from the merger of four companies: Brunner Mond, Nobel Explosives, the United Alkali Company, and British Dyestuffs Corporation. It established its head office at Millbank in London in 1928. Competing with DuPont and IG Farben, the new company produced chemicals, explosives, fertilisers, insecticides, dyestuffs, non-ferrous metals, and paints. In its first year turnover was £27 million. In the 1920s and 1930s, the company played a key role in the development of new chemical products, including the dyestuff phthalocyanine (1929), the acrylic plastic Perspex (1932), Dulux paints (1932, co-developed with DuPont), polyethylene (1937), and polyethylene terephthalate fibre known as Terylene (1941). In 1940, ICI started British Nylon Spinners as a joint venture with Courtaulds. ICI also owned the Sunbeam motorcycle business, which had come with Nobel Industries, and continued to build motorcycles until 1937. During the Second World War, ICI was involved with the United Kingdom's nuclear weapons programme codenamed Tube Alloys. 
In the 1940s and 1950s, the company established its pharmaceutical business and developed a number of key products, including Paludrine (1940s, an anti-malarial drug), halothane (1951, an inhalational anaesthetic agent), propofol (1977, an intravenous anaesthetic agent), Inderal (1965, a beta-blocker), tamoxifen (1978, a frequently used drug for breast cancer), and PEEK (1979, a high performance thermoplastic). ICI formed ICI Pharmaceuticals in 1957. ICI developed a fabric in the 1950s known as Crimplene, a thick polyester yarn used to make a fabric of the same name. The resulting cloth is heavy and wrinkle-resistant, and retains its shape well. The California-based fashion designer Edith Flagg was the first to import this fabric from Britain to the United States. During the first two years, ICI gave Flagg a large advertising budget to popularise the fabric across America. In 1960, Paul Chambers became the first chairman appointed from outside the company. Chambers employed the consultancy firm McKinsey to help with reorganising the company. His eight-year tenure saw export sales double, but his reputation was severely damaged by a failed takeover bid for Courtaulds in 1961–62. ICI was confronted with the nationalisation of its operations in Burma on 1 August 1962 as a consequence of the military coup. In 1964, ICI acquired British Nylon Spinners (BNS), the company it had jointly set up in 1940 with Courtaulds. ICI surrendered its 37.5 per cent holding in Courtaulds and paid Courtaulds £2 million a year for five years, "to take account of the future development expenditure of Courtaulds in the nylon field." In return, Courtaulds transferred to ICI their 50 per cent holding in BNS. BNS was absorbed into ICI's existing polyester operation, ICI Fibres. The acquisition included BNS production plants in Pontypool, Gloucester and Doncaster, together with research and development in Pontypool. 
Early pesticide development under ICI Plant Protection Division, with its plant at Yalding, Kent, research station at Jealott's Hill and HQ at Fernhurst Research Station, included paraquat (1962, a herbicide) and the insecticides pirimiphos-methyl (1967) and pirimicarb (1970); brodifacoum (a rodenticide) was developed in 1974, and in the late 1970s ICI was involved in the early development of synthetic pyrethroid insecticides such as lambda-cyhalothrin. Peter Allen was appointed chairman between 1968 and 1971. He presided over the purchase of Viyella. Profits shrank under his tenure. During his tenure, ICI created the wholly owned subsidiary Cleveland Potash Ltd, for the construction of Boulby Mine in Redcar and Cleveland, North Yorkshire. The first shaft was dug in 1968, with full production from 1976. ICI jointly owned the mine with Anglo American, and then with De Beers, before complete ownership was transferred to Israel Chemicals Ltd in 2002. Jack Callard was appointed chairman from 1971 to 1975. He almost doubled company profits between 1972 and 1974, and made ICI Britain's largest exporter. In 1971, the company acquired Atlas Chemical Industries Inc., a major American competitor. In 1977, Imperial Metal Industries was divested as an independent quoted company. From 1982 to 1987, the company was led by the charismatic John Harvey-Jones. Under his leadership, the company acquired the Beatrice Chemical Division in 1985 and Glidden Coatings & Resins, a leading paints business, in 1986. By the early 1990s, plans were carried out to demerge the company, as a result of increasing competition and internal complexity that caused heavy retrenchment and slowing innovation. In 1991, ICI sold the agricultural and merchandising operations of BritAg and Scottish Agricultural Industries to Norsk Hydro, and fought off a hostile takeover bid from Hanson, who had acquired 2.8 per cent of the company. 
It also divested its soda ash products arm to Brunner Mond, ending an association with the trade that had existed since the company's inception, one that had been inherited from the original Brunner, Mond & Co. Ltd. In 1992, the company sold its nylon business to DuPont. In 1993, the company de-merged its pharmaceutical bio-science businesses: pharmaceuticals, agrochemicals, specialities, seeds and biological products were all transferred into a new and independent company called Zeneca. Zeneca subsequently merged with Astra AB to form AstraZeneca. Charles Miller Smith was appointed CEO in 1994, one of the few times that someone from outside ICI had been appointed to lead the company, Smith having previously been a director at Unilever. Shortly afterwards, the company acquired a number of former Unilever businesses in an attempt to move away from its historical reliance on commodity chemicals. In 1995, ICI acquired the American paint companies Devoe Paints, Fuller-O'Brien Paints and Grow Group. In 1997, ICI acquired National Starch & Chemical, Quest International, Unichema, and Crosfield, the speciality chemicals businesses of Unilever for $8 billion. This step was part of a strategy to move away from cyclical bulk chemicals and to progress up the value chain to become a higher growth, higher margin business. Later that year it went on to buy Rutz & Huber, a Swiss paints business. Having taken on some £4 billion of debt to finance these acquisitions, the company had to sell off its commodity chemicals businesses: Having sold much of its historically profitable commodities businesses, and many of the new speciality businesses which it had failed to integrate, the company consisted mainly of the Dulux paints business, which quickly found itself the subject of a takeover by AkzoNobel. Dutch firm AkzoNobel (owner of Crown Berger paints) bid £7.2 billion (€10.66 billion or $14.5 billion) for ICI in June 2007. 
An area of concern about a potential deal was ICI's British pension fund, which had a deficit of almost £700 million and future liabilities of more than £9 billion at the time. Regulatory issues in the UK and other markets where Dulux and Crown Paints brands each have significant market share were also a cause for concern for the boards of ICI and AkzoNobel. In the UK, any combined operation without divestments would have seen AkzoNobel have a 54 per cent market share in the paint market. The initial bid was rejected by the ICI board and the majority of shareholders. However, a subsequent bid for £8 billion (€11.82 billion) was accepted by ICI in August 2007, pending approval by regulators. On 2 January 2008, completion of the takeover of ICI plc by AkzoNobel was announced. Shareholders of ICI received either £6.70 in cash or AkzoNobel loan notes to the value of £6.70 per one nominal ICI share. The adhesives business of ICI was transferred to Henkel as a result of the deal, while AkzoNobel agreed to sell its Crown Paints subsidiary to satisfy the concerns of the European Commissioner for Competition. The areas of concern regarding the ICI UK pension scheme were addressed by ICI and AkzoNobel. ICI operated a number of chemical sites around the world. In the UK, the main plants were as follows: An ICI subsidiary called Duperial operated in Argentina from 1928 to 1995, when it was renamed ICI. Established in the city of San Lorenzo, Santa Fe, it operates an integrated production site with commercial offices in Buenos Aires. Since 2009 it has made sulphuric acid with ISO certification under the company name Akzo Nobel Functional Chemicals S.A. It also had an operation at Palmira, Mendoza, for its Wine Chemicals Division, which manufactured tartaric acid, wine alcohol and grapeseed oil from natural raw material coming from the wine industry in the provinces of Mendoza and San Juan. This operation held 10% world market share for tartaric acid. 
It was sold in 2008 and currently operates as Derivados Vínicos S.A. (DERVINSA). The subsidiary ICI Australia Ltd established the Dry Creek Saltfields at Dry Creek north of Adelaide, South Australia, in 1940, with an associated soda ash plant at nearby Osborne. In 1989, these operations were sold to Penrice Soda Products. An ICI plant was built at Botany Bay in New South Wales in the 1940s and was part of the Orica demerger in 1997. The plant once manufactured paints, plastics and industrial chemicals such as solvents. It was responsible for the Botany Bay Groundwater Plume contamination of a local aquifer. In 1968 a subsidiary of Imperial Chemical Industries (ICI) was established in then-East Pakistan. After Bangladesh gained independence in 1971, the company was incorporated on 24 January 1973 as ICI Bangladesh Manufacturers Limited and also as a public limited company. The company divested its investment in Bangladesh and was renamed Advanced Chemical Industries Limited (ACI Limited) on 5 May 1992. The company sold its insect control, air care and toilet care brands to SC Johnson & Son in 2015. Currently Advanced Chemical Industries (ACI) Limited is one of the largest conglomerates in Bangladesh with a multinational heritage operating across the country. The company operates through three reporting divisions: Pharmaceuticals, Consumer Brands and Agribusiness. ICI maintained offices in Colombo importing and supplying chemicals for manufacturers in Ceylon. In 1964, following import restrictions that allowed only locally owned subsidiaries of multinational companies to gain import licenses, Chemical Industries (Colombo) Limited was formed as an ICI subsidiary with 49% ICI ownership and the remainder held publicly. The subsidiary ICI New Zealand provided substantial quantities of chemical products – including swimming pool chemicals, commercial healthcare products, herbicides and pesticides for use within New Zealand and the neighbouring Pacific Islands. 
A fire at the ICI New Zealand store in Mount Wellington, Auckland, on 21 December 1984, killed an ICI employee and caused major health concerns. Over 200 firefighters were exposed to toxic smoke and effluents during the firefighting efforts. Six firefighters retired for medical reasons as a result of the fire. This incident was a major event in the history of the New Zealand Fire Service and subject to a formal investigation, led by future Chief Justice Sian Elias. The fire was a trigger for major reforms of the service; direct consequences included improved protective clothing for firefighters, a standard safety protocol for major incidents, the introduction of dedicated fireground safety officers, and changes to occupational health regulations.
[ { "paragraph_id": 0, "text": "Imperial Chemical Industries (ICI) was a British chemical company. It was, for much of its history, the largest manufacturer in Britain. It was formed by the merger of four leading British chemical companies in 1926. Its headquarters were at Millbank in London. ICI was a constituent of the FT 30 and later the FTSE 100 indices.", "title": "" }, { "paragraph_id": 1, "text": "ICI made general chemicals, plastics, paints, pharmaceuticals and speciality products, including food ingredients, speciality polymers, electronic materials, fragrances and flavourings. In 2008, it was acquired by AkzoNobel, which immediately sold parts of ICI to Henkel and integrated ICI's remaining operations within its existing organisation.", "title": "" }, { "paragraph_id": 2, "text": "The company was founded in December 1926 from the merger of four companies: Brunner Mond, Nobel Explosives, the United Alkali Company, and British Dyestuffs Corporation. It established its head office at Millbank in London in 1928. Competing with DuPont and IG Farben, the new company produced chemicals, explosives, fertilisers, insecticides, dyestuffs, non-ferrous metals, and paints. In its first year turnover was £27 million.", "title": "History" }, { "paragraph_id": 3, "text": "In the 1920s and 1930s, the company played a key role in the development of new chemical products, including the dyestuff phthalocyanine (1929), the acrylic plastic Perspex (1932), Dulux paints (1932, co-developed with DuPont), polyethylene (1937), and polyethylene terephthalate fibre known as Terylene (1941). 
In 1940, ICI started British Nylon Spinners as a joint venture with Courtaulds.", "title": "History" }, { "paragraph_id": 4, "text": "ICI also owned the Sunbeam motorcycle business, which had come with Nobel Industries, and continued to build motorcycles until 1937.", "title": "History" }, { "paragraph_id": 5, "text": "During the Second World War, ICI was involved with the United Kingdom's nuclear weapons programme codenamed Tube Alloys.", "title": "History" }, { "paragraph_id": 6, "text": "In the 1940s and 1950s, the company established its pharmaceutical business and developed a number of key products, including Paludrine (1940s, an anti-malarial drug), halothane (1951, an inhalational anaesthetic agent), propofol (1977, an intravenous anaesthetic agent), Inderal (1965, a beta-blocker), tamoxifen (1978, a frequently used drug for breast cancer), and PEEK (1979, a high performance thermoplastic). ICI formed ICI Pharmaceuticals in 1957.", "title": "History" }, { "paragraph_id": 7, "text": "ICI developed a fabric in the 1950s known as Crimplene, a thick polyester yarn used to make a fabric of the same name. The resulting cloth is heavy and wrinkle-resistant, and retains its shape well. The California-based fashion designer Edith Flagg was the first to import this fabric from Britain to the United States. During the first two years, ICI gave Flagg a large advertising budget to popularise the fabric across America.", "title": "History" }, { "paragraph_id": 8, "text": "In 1960, Paul Chambers became the first chairman appointed from outside the company. Chambers employed the consultancy firm McKinsey to help with reorganising the company. 
His eight-year tenure saw export sales double, but his reputation was severely damaged by a failed takeover bid for Courtaulds in 1961–62.", "title": "History" }, { "paragraph_id": 9, "text": "ICI was confronted with the nationalisation of its operations in Burma on 1 August 1962 as a consequence of the military coup.", "title": "History" }, { "paragraph_id": 10, "text": "In 1964, ICI acquired British Nylon Spinners (BNS), the company it had jointly set up in 1940 with Courtaulds. ICI surrendered its 37.5 per cent holding in Courtaulds and paid Courtaulds £2 million a year for five years, \"to take account of the future development expenditure of Courtaulds in the nylon field.\" In return, Courtaulds transferred to ICI their 50 per cent holding in BNS. BNS was absorbed into ICI's existing polyester operation, ICI Fibres. The acquisition included BNS production plants in Pontypool, Gloucester and Doncaster, together with research and development in Pontypool.", "title": "History" }, { "paragraph_id": 11, "text": "Early pesticide development under ICI Plant Protection Division, with its plant at Yalding, Kent, research station at Jealott's Hill and HQ at Fernhurst Research Station, included paraquat (1962, a herbicide) and the insecticides pirimiphos-methyl (1967) and pirimicarb (1970); brodifacoum (a rodenticide) was developed in 1974, and in the late 1970s ICI was involved in the early development of synthetic pyrethroid insecticides such as lambda-cyhalothrin.", "title": "History" }, { "paragraph_id": 12, "text": "Peter Allen was appointed chairman between 1968 and 1971. He presided over the purchase of Viyella. Profits shrank under his tenure. During his tenure, ICI created the wholly owned subsidiary Cleveland Potash Ltd, for the construction of Boulby Mine in Redcar and Cleveland, North Yorkshire. The first shaft was dug in 1968, with full production from 1976. 
ICI jointly owned the mine with Anglo American, and then with De Beers, before complete ownership was transferred to Israel Chemicals Ltd in 2002.", "title": "History" }, { "paragraph_id": 13, "text": "Jack Callard was appointed chairman from 1971 to 1975. He almost doubled company profits between 1972 and 1974, and made ICI Britain's largest exporter. In 1971, the company acquired Atlas Chemical Industries Inc., a major American competitor.", "title": "History" }, { "paragraph_id": 14, "text": "In 1977, Imperial Metal Industries was divested as an independent quoted company. From 1982 to 1987, the company was led by the charismatic John Harvey-Jones. Under his leadership, the company acquired the Beatrice Chemical Division in 1985 and Glidden Coatings & Resins, a leading paints business, in 1986.", "title": "History" }, { "paragraph_id": 15, "text": "By the early 1990s, plans were carried out to demerge the company, as a result of increasing competition and internal complexity that caused heavy retrenchment and slowing innovation. In 1991, ICI sold the agricultural and merchandising operations of BritAg and Scottish Agricultural Industries to Norsk Hydro, and fought off a hostile takeover bid from Hanson, who had acquired 2.8 per cent of the company. It also divested its soda ash products arm to Brunner Mond, ending an association with the trade that had existed since the company's inception, one that had been inherited from the original Brunner, Mond & Co. Ltd.", "title": "History" }, { "paragraph_id": 16, "text": "In 1992, the company sold its nylon business to DuPont. In 1993, the company de-merged its pharmaceutical bio-science businesses: pharmaceuticals, agrochemicals, specialities, seeds and biological products were all transferred into a new and independent company called Zeneca. 
Zeneca subsequently merged with Astra AB to form AstraZeneca.", "title": "History" }, { "paragraph_id": 17, "text": "Charles Miller Smith was appointed CEO in 1994, one of the few times that someone from outside ICI had been appointed to lead the company, Smith having previously been a director at Unilever. Shortly afterwards, the company acquired a number of former Unilever businesses in an attempt to move away from its historical reliance on commodity chemicals. In 1995, ICI acquired the American paint companies Devoe Paints, Fuller-O'Brien Paints and Grow Group. In 1997, ICI acquired National Starch & Chemical, Quest International, Unichema, and Crosfield, the speciality chemicals businesses of Unilever for $8 billion. This step was part of a strategy to move away from cyclical bulk chemicals and to progress up the value chain to become a higher growth, higher margin business. Later that year it went on to buy Rutz & Huber, a Swiss paints business.", "title": "History" }, { "paragraph_id": 18, "text": "Having taken on some £4 billion of debt to finance these acquisitions, the company had to sell off its commodity chemicals businesses:", "title": "History" }, { "paragraph_id": 19, "text": "Having sold much of its historically profitable commodities businesses, and many of the new speciality businesses which it had failed to integrate, the company consisted mainly of the Dulux paints business, which quickly found itself the subject of a takeover by AkzoNobel.", "title": "History" }, { "paragraph_id": 20, "text": "Dutch firm AkzoNobel (owner of Crown Berger paints) bid £7.2 billion (€10.66 billion or $14.5 billion) for ICI in June 2007. An area of concern about a potential deal was ICI's British pension fund, which had a deficit of almost £700 million and future liabilities of more than £9 billion at the time. 
Regulatory issues in the UK and other markets where Dulux and Crown Paints brands each have significant market share were also a cause for concern for the boards of ICI and AkzoNobel. In the UK, any combined operation without divestments would have seen AkzoNobel have a 54 per cent market share in the paint market. The initial bid was rejected by the ICI board and the majority of shareholders. However, a subsequent bid for £8 billion (€11.82 billion) was accepted by ICI in August 2007, pending approval by regulators.", "title": "History" }, { "paragraph_id": 21, "text": "On 2 January 2008, completion of the takeover of ICI plc by AkzoNobel was announced. Shareholders of ICI received either £6.70 in cash or AkzoNobel loan notes to the value of £6.70 per one nominal ICI share. The adhesives business of ICI was transferred to Henkel as a result of the deal, while AkzoNobel agreed to sell its Crown Paints subsidiary to satisfy the concerns of the European Commissioner for Competition. The areas of concern regarding the ICI UK pension scheme were addressed by ICI and AkzoNobel.", "title": "History" }, { "paragraph_id": 22, "text": "ICI operated a number of chemical sites around the world. In the UK, the main plants were as follows:", "title": "Operations" }, { "paragraph_id": 23, "text": "An ICI subsidiary called Duperial operated in Argentina from 1928 to 1995, when it was renamed ICI.", "title": "Operations" }, { "paragraph_id": 24, "text": "Established in the city of San Lorenzo, Santa Fe, it operates an integrated production site with commercial offices in Buenos Aires. 
Since 2009 it has made sulphuric acid with ISO certification under the company name Akzo Nobel Functional Chemicals S.A.", "title": "Operations" }, { "paragraph_id": 25, "text": "It also had an operation at Palmira, Mendoza, for its Wine Chemicals Division, which manufactured tartaric acid, wine alcohol and grapeseed oil from natural raw material coming from the wine industry in the provinces of Mendoza and San Juan. This operation held 10% world market share for tartaric acid. It was sold in 2008 and currently operates as Derivados Vínicos S.A. (DERVINSA).", "title": "Operations" }, { "paragraph_id": 26, "text": "The subsidiary ICI Australia Ltd established the Dry Creek Saltfields at Dry Creek north of Adelaide, South Australia, in 1940, with an associated soda ash plant at nearby Osborne. In 1989, these operations were sold to Penrice Soda Products. An ICI plant was built at Botany Bay in New South Wales in the 1940s and was part of the Orica demerger in 1997.", "title": "Operations" }, { "paragraph_id": 27, "text": "The plant once manufactured paints, plastics and industrial chemicals such as solvents. It was responsible for the Botany Bay Groundwater Plume contamination of a local aquifer.", "title": "Operations" }, { "paragraph_id": 28, "text": "In 1968 a subsidiary of Imperial Chemical Industries (ICI) was established in then-East Pakistan. After Bangladesh gained independence in 1971, the company was incorporated on 24 January 1973 as ICI Bangladesh Manufacturers Limited and also as a public limited company. The company divested its investment in Bangladesh and was renamed Advanced Chemical Industries Limited (ACI Limited) on 5 May 1992. The company sold its insect control, air care and toilet care brands to SC Johnson & Son in 2015. Currently Advanced Chemical Industries (ACI) Limited is one of the largest conglomerates in Bangladesh with a multinational heritage operating across the country. 
The company operates through three reporting divisions: Pharmaceuticals, Consumer Brands and Agribusiness.", "title": "Operations" }, { "paragraph_id": 29, "text": "ICI maintained offices in Colombo importing and supplying chemicals for manufacturers in Ceylon. In 1964, following import restrictions that allowed only locally owned subsidiaries of multinational companies to gain import licenses, Chemical Industries (Colombo) Limited was formed as an ICI subsidiary with 49% ICI ownership and the remainder held publicly.", "title": "Operations" }, { "paragraph_id": 30, "text": "The subsidiary ICI New Zealand provided substantial quantities of chemical products – including swimming pool chemicals, commercial healthcare products, herbicides and pesticides for use within New Zealand and the neighbouring Pacific Islands.", "title": "Operations" }, { "paragraph_id": 31, "text": "A fire at the ICI New Zealand store in Mount Wellington, Auckland, on 21 December 1984, killed an ICI employee and caused major health concerns. Over 200 firefighters were exposed to toxic smoke and effluents during the firefighting efforts. Six firefighters retired for medical reasons as a result of the fire. This incident was a major event in the history of the New Zealand Fire Service and subject to a formal investigation, led by future Chief Justice Sian Elias. The fire was a trigger for major reforms of the service; direct consequences included improved protective clothing for firefighters, a standard safety protocol for major incidents, the introduction of dedicated fireground safety officers, and changes to occupational health regulations.", "title": "Operations" } ]
Imperial Chemical Industries (ICI) was a British chemical company. It was, for much of its history, the largest manufacturer in Britain. It was formed by the merger of four leading British chemical companies in 1926. Its headquarters were at Millbank in London. ICI was a constituent of the FT 30 and later the FTSE 100 indices. ICI made general chemicals, plastics, paints, pharmaceuticals and speciality products, including food ingredients, speciality polymers, electronic materials, fragrances and flavourings. In 2008, it was acquired by AkzoNobel, which immediately sold parts of ICI to Henkel and integrated ICI's remaining operations within its existing organisation.
2001-12-12T11:45:09Z
2023-11-28T08:03:59Z
[ "Template:Use dmy dates", "Template:Infobox company", "Template:Portal", "Template:Cite news", "Template:Cite journal", "Template:Authority control", "Template:Cite press release", "Template:Citation", "Template:Imperial Chemical Industries Chairman", "Template:Lead too short", "Template:Use British English", "Template:Reflist", "Template:Dead link", "Template:Short description", "Template:Citation needed", "Template:Cite web", "Template:Cite book", "Template:Cite ODNB", "Template:FT 30 constituents" ]
https://en.wikipedia.org/wiki/Imperial_Chemical_Industries
15,357
Imperial Airways
Imperial Airways was an early British commercial long-range airline, operating from 1924 to 1939 and principally serving the British Empire routes to South Africa, India, Australia and the Far East, including Malaya and Hong Kong. Passengers were typically businessmen or colonial administrators, and most flights carried about 20 passengers or fewer. Accidents were frequent: in the first six years, 32 people died in seven incidents. Imperial Airways never achieved the levels of technological innovation of its competitors and was merged into the British Overseas Airways Corporation (BOAC) in 1939. BOAC in turn merged with British European Airways (BEA) in 1974 to form British Airways. Imperial Airways was established in the expectation that aviation would facilitate overseas settlement by making travel to and from the colonies quicker, and that flight would also speed up colonial government and trade, which until then had depended upon ships. The launch of the airline followed a burst of air route surveying in the British Empire after the First World War, and after some experimental (and often dangerous) long-distance flying to the margins of Empire. Imperial Airways was created against a background of stiff competition from French and German airlines that enjoyed heavy government subsidies, and following the advice of the government's Hambling Committee (formally known as the C.A.T. Subsidies Committee) under Sir Herbert Hambling. The committee, set up on 2 January 1923, produced a report on 15 February 1923 recommending that four of the largest existing airlines should be merged: the Instone Air Line Company, owned by shipping magnate Samuel Instone; Noel Pemberton Billing's British Marine Air Navigation (part of the Supermarine flying-boat company); the Daimler Airway, under the management of George Edward Woods; and Handley Page Transport Co Ltd.
It was hoped that this would create a company which could compete against French and German competition and would be strong enough to develop Britain's external air services while minimizing government subsidies for duplicated services. With this in view, a £1m subsidy over ten years was offered to encourage the merger. Agreement was made between the President of the Air Council and the British, Foreign and Colonial Corporation on 3 December 1923 for the company, under the title of the 'Imperial Air Transport Company', to acquire existing air transport services in the UK. The agreement set out the government subsidies for the new company: £137,000 in the first year, diminishing to £32,000 in the tenth year, as well as minimum mileages to be achieved and penalties if these were not met. Imperial Airways Limited was formed on 31 March 1924 with equipment from each contributing concern: British Marine Air Navigation Company Ltd, the Daimler Airway, Handley Page Transport Ltd and the Instone Air Line Ltd. Sir Eric Geddes was appointed the chairman of the board with one director from each of the merged companies. The government had appointed two directors, Hambling (who was also President of the Institute of Bankers) and Major John Hills, a former Treasury Financial Secretary. The land operations were based at Croydon Airport to the south of London. IAL immediately discontinued its predecessors' service to points north of London, the airline being focused on international and imperial service rather than domestic. Thereafter the only IAL aircraft operating 'North of Watford' were charter flights. Industrial troubles with the pilots delayed the start of services until 26 April 1924, when a daily London–Paris route was opened with a de Havilland DH.34.
Thereafter the task of expanding the routes between England and the Continent began, with Southampton–Guernsey on 1 May 1924, London–Brussels–Cologne on 3 May, London–Amsterdam on 2 June 1924, and a summer service from London–Paris–Basel–Zürich on 17 June 1924. The first new airliner ordered by Imperial Airways was the Handley Page W8f City of Washington, delivered on 3 November 1924. In the first year of operation the company carried 11,395 passengers and 212,380 letters. In April 1925, the film The Lost World became the first film to be screened for passengers on a scheduled airliner flight when it was shown on the London–Paris route. Between 16 November 1925 and 13 March 1926, Alan Cobham made an Imperial Airways route survey flight from the UK to Cape Town and back in the Armstrong Siddeley Jaguar–powered de Havilland DH.50J floatplane G-EBFO. The outward route was London–Paris–Marseille–Pisa–Taranto–Athens–Sollum–Cairo–Luxor–Aswan–Wadi Halfa–Atbara–Khartoum–Malakal–Mongalla–Jinja–Kisumu–Tabora–Abercorn–Ndola–Broken Hill–Livingstone–Bulawayo–Pretoria–Johannesburg–Kimberley–Bloemfontein–Cape Town. On his return Cobham was awarded the Air Force Cross for his services to aviation. On 30 June 1926, Cobham took off from the River Medway at Rochester in G-EBFO to make an Imperial Airways route survey for a service to Melbourne, arriving on 15 August 1926. He left Melbourne on 29 August 1926, and, after completing 28,000 nautical miles (32,000 mi; 52,000 km) in 320 hours flying time over 78 days, he alighted on the Thames at Westminster on 1 October 1926. Cobham was met by the Secretary of State for Air, Sir Samuel Hoare, and was subsequently knighted by HM King George V. On 27 December 1926, Imperial Airways de Havilland DH.66 Hercules G-EBMX City of Delhi left Croydon for a survey flight to India. The flight reached Karachi on 6 January 1927 and Delhi on 8 January 1927. The aircraft was named by Lady Irwin, wife of the Viceroy, on 10 January 1927.
The return flight left on 1 February 1927 and arrived at Heliopolis, Cairo on 7 February 1927. The flying time from Croydon to Delhi was 62 hours 27 minutes and Delhi to Heliopolis 32 hours 50 minutes. Regular services on the Cairo to Basra route began on 12 January 1927 using DH.66 aircraft, replacing the previous RAF mail flight. Following two years of negotiations with the Persian authorities regarding overflight rights, a London to Karachi service started on 30 March 1929, taking seven days and consisting of a flight from London to Basle, a train to Genoa, a Short S.8 Calcutta flying boat to Alexandria, a train to Cairo and finally a DH.66 flight to Karachi. The route was extended as far as Delhi on 29 December 1929. The route across Europe and the Mediterranean changed many times over the next few years but almost always involved a rail journey. In April 1931 an experimental London–Australia air mail flight took place; after the DH66 City of Cairo crash-landed in Timor on 19 April, having run out of fuel, the mail was transferred in the Dutch East Indies and took 26 days in total to reach Sydney. For the passenger flight leaving London on 1 October 1932, the Eastern route was switched from the Persian to the Arabian side of the Persian Gulf, and Handley Page HP 42 airliners were introduced on the Cairo to Karachi sector. The move saw the establishment of an airport and rest house, Mahatta Fort, in the Trucial State of Sharjah, now part of the United Arab Emirates. On 29 May 1933 an England to Australia survey flight took off, operated by Imperial Airways Armstrong Whitworth Atalanta G-ABTL Astraea. Major H. G. Brackley, Imperial Airways' Air Superintendent, was in charge of the flight.
Astraea flew Croydon-Paris-Lyon-Rome-Brindisi-Athens-Alexandria-Cairo where it followed the normal route to Karachi then onwards to Jodhpur-Delhi-Calcutta-Akyab-Rangoon-Bangkok-Prachuab-Alor Setar-Singapore-Palembang-Batavia-Sourabaya-Bima-Koepang-Bathurst Island-Darwin-Newcastle Waters-Camooweal-Cloncurry-Longreach-Roma-Toowoomba reaching Eagle Farm, Brisbane on 23 June. Sydney was visited on 26 June, Canberra on 28 June and Melbourne on 29 June. There followed a rapid eastern extension. The first London to Calcutta service departed on 1 July 1933, the first London to Rangoon service on 23 September 1933, the first London to Singapore service on 9 December 1933, and the first London to Brisbane service on 8 December 1934, with Qantas responsible for the Singapore to Brisbane sector. (The 1934 start was for mail; passenger flights to Brisbane began the following April.) The first London to Hong Kong passengers departed London on 14 March 1936 following the establishment of a branch from Penang to Hong Kong. On 28 February 1931 a weekly service began between London and Mwanza on Lake Victoria in Tanganyika as part of the proposed route to Cape Town. On 9 December 1931 the Imperial Airways service for Central Africa was extended experimentally to Cape Town for the carriage of Christmas mail. The aircraft used on the last sector, DH66 G-AARY City of Karachi, arrived in Cape Town on 21 December 1931. On 20 January 1932 a mail-only route from London to Cape Town was opened. On 27 April this route was opened to passengers and took 10 days. In early 1933 Atalantas replaced the DH.66s on the Kisumu to Cape Town sector of the London to Cape Town route. On 9 February 1936 the trans-Africa route was opened by Imperial Airways between Khartoum and Kano in Nigeria. This route was extended to Lagos on 15 October 1936. In 1937, with the introduction of Short Empire flying boats built by Short Brothers, Imperial Airways could offer a through-service from Southampton to the Empire.
The journey to the Cape was via Marseille, Rome, Brindisi, Athens, Alexandria, Khartoum, Port Bell, Kisumu and onwards by land-based craft to Nairobi, Mbeya and eventually Cape Town. Survey flights were also made across the Atlantic and to New Zealand. By mid-1937 Imperial had completed its thousandth service to the Empire. Starting in 1938 Empire flying boats also flew between Britain and Australia via India and the Middle East. In March 1939 three Shorts a week left Southampton for Australia, reaching Sydney after ten days of flying and nine overnight stops. Three more left for South Africa, taking six flying days to Durban. Imperial's aircraft were small, most seating fewer than twenty passengers; about 50,000 passengers used Imperial Airways in the 1930s. Most passengers on intercontinental routes or on services within and between British colonies were men in colonial administration, business or research. To begin with only the wealthy could afford to fly, but passenger lists gradually diversified. Travel experiences of flying low and slow were recounted enthusiastically in newspapers, magazines and books. There was opportunity for sightseeing from the air and at stops. Imperial Airways stationed its all-male flight deck crew, cabin crew and ground crew along the length of its routes. Specialist engineers and inspectors – and ground crew on rotation or leave – travelled on the airline without generating any seat revenue. Several air crew lost their lives in accidents. At the end of the 1930s crew numbers were approximately 3,000. All crew were expected to be ambassadors for Britain and the British Empire. In 1934 the government began negotiations with Imperial Airways to establish a service (Empire Air Mail Scheme) to carry mail by air on routes served by the airline.
Indirectly these negotiations led to the dismissal in 1936 of Sir Christopher Bullock, the Permanent Under-Secretary at the Air Ministry, who was found by a Board of Inquiry to have abused his position in seeking a position on the board of the company while these negotiations were in train. The government, including the Prime Minister, regretted the decision to dismiss him, later finding that, in fact, no corruption was alleged, and sought Bullock's reinstatement, which he declined. The Empire Air Mail Programme started in July 1937, delivering anywhere for 1½d. per ounce. By mid-1938 a hundred tons of mail had been delivered to India and a similar amount to Africa. In the same year, construction was started on the Empire Terminal in Victoria, London, designed by A. Lakeman and with a statue by Eric Broadbent, Speed Wings Over the World, gracing the portal above the main entrance. From the terminal there were train connections to Imperial's flying boats at Southampton and coaches to its landplane base at Croydon Airport. The terminal remained in use until 1980. To help promote use of the Air Mail service, in June and July 1939, Imperial Airways participated with Pan American Airways in providing a special "around the world" service; Imperial carried the souvenir mail from Foynes, Ireland, to Hong Kong, as part of the eastbound New York to New York route. Pan American provided service from New York to Foynes (departing 24 June, via the first flight of Northern FAM 18) and Hong Kong to San Francisco (via FAM 14), and United Airlines carried it on the final leg from San Francisco to New York, arriving on 28 July. Captain H. W. C. Alger was the pilot for the inaugural air mail flight carrying mail from England to Australia for the first time, on the Short Empire flying boat Castor, for Imperial Airways' Empire Air Routes in 1937.
In November 2016, 80 years later, the Crete2Cape Vintage Air Rally flew this old route with fifteen vintage aeroplanes – a celebration of the skill and determination of these early aviators. Before the outbreak of war on 1 September 1939, the British government had already implemented the Air Navigation (Restriction in Time of War) Order 1939. The Order required the military takeover of most civilian airfields in the UK, the cessation of all private flying without individual flight permits, and other emergency measures. It was administered by a statutory department of the Air Ministry titled National Air Communications (NAC). By 1 September 1939, the aircraft and administrations of Imperial Airways and British Airways Ltd were physically transferred to Bristol (Whitchurch) Airport, to be operated jointly by NAC. On 1 April 1940, Imperial Airways Ltd and British Airways Ltd were officially combined into a new company, British Overseas Airways Corporation (BOAC), that had already been formed on 24 November 1939 with retrospective financial arrangements. Imperial Airways operated many types of aircraft from its formation on 1 April 1924 until 1 April 1940 when all aircraft still in service were transferred to BOAC.
[ { "paragraph_id": 0, "text": "Imperial Airways was an early British commercial long-range airline, operating from 1924 to 1939 and principally serving the British Empire routes to South Africa, India, Australia and the Far East, including Malaya and Hong Kong. Passengers were typically businessmen or colonial administrators, and most flights carried about 20 passengers or fewer. Accidents were frequent: in the first six years, 32 people died in seven incidents. Imperial Airways never achieved the levels of technological innovation of its competitors and was merged into the British Overseas Airways Corporation (BOAC) in 1939. BOAC in turn merged with British European Airways (BEA) in 1974 to form British Airways.", "title": "" }, { "paragraph_id": 1, "text": "Imperial Airways was established in the expectation that aviation would facilitate overseas settlement by making travel to and from the colonies quicker, and that flight would also speed up colonial government and trade, which until then had depended upon ships. The launch of the airline followed a burst of air route surveying in the British Empire after the First World War, and after some experimental (and often dangerous) long-distance flying to the margins of Empire.", "title": "Background" }, { "paragraph_id": 2, "text": "Imperial Airways was created against a background of stiff competition from French and German airlines that enjoyed heavy government subsidies, and following the advice of the government's Hambling Committee (formally known as the C.A.T. Subsidies Committee) under Sir Herbert Hambling. 
The committee, set up on 2 January 1923, produced a report on 15 February 1923 recommending that four of the largest existing airlines should be merged: the Instone Air Line Company, owned by shipping magnate Samuel Instone; Noel Pemberton Billing's British Marine Air Navigation (part of the Supermarine flying-boat company); the Daimler Airway, under the management of George Edward Woods; and Handley Page Transport Co Ltd. It was hoped that this would create a company which could compete against French and German competition and would be strong enough to develop Britain's external air services while minimizing government subsidies for duplicated services. With this in view, a £1m subsidy over ten years was offered to encourage the merger. Agreement was made between the President of the Air Council and the British, Foreign and Colonial Corporation on 3 December 1923 for the company, under the title of the 'Imperial Air Transport Company', to acquire existing air transport services in the UK. The agreement set out the government subsidies for the new company: £137,000 in the first year, diminishing to £32,000 in the tenth year, as well as minimum mileages to be achieved and penalties if these were not met.", "title": "Formation" }, { "paragraph_id": 3, "text": "Imperial Airways Limited was formed on 31 March 1924 with equipment from each contributing concern: British Marine Air Navigation Company Ltd, the Daimler Airway, Handley Page Transport Ltd and the Instone Air Line Ltd. Sir Eric Geddes was appointed the chairman of the board with one director from each of the merged companies. The government had appointed two directors, Hambling (who was also President of the Institute of Bankers) and Major John Hills, a former Treasury Financial Secretary.", "title": "Formation" }, { "paragraph_id": 4, "text": "The land operations were based at Croydon Airport to the south of London. 
IAL immediately discontinued its predecessors' service to points north of London, the airline being focused on international and imperial service rather than domestic. Thereafter the only IAL aircraft operating 'North of Watford' were charter flights.", "title": "Formation" }, { "paragraph_id": 5, "text": "Industrial troubles with the pilots delayed the start of services until 26 April 1924, when a daily London–Paris route was opened with a de Havilland DH.34. Thereafter the task of expanding the routes between England and the Continent began, with Southampton–Guernsey on 1 May 1924, London–Brussels–Cologne on 3 May, London–Amsterdam on 2 June 1924, and a summer service from London–Paris–Basel–Zürich on 17 June 1924. The first new airliner ordered by Imperial Airways was the Handley Page W8f City of Washington, delivered on 3 November 1924. In the first year of operation the company carried 11,395 passengers and 212,380 letters. In April 1925, the film The Lost World became the first film to be screened for passengers on a scheduled airliner flight when it was shown on the London–Paris route.", "title": "Formation" }, { "paragraph_id": 6, "text": "Between 16 November 1925 and 13 March 1926, Alan Cobham made an Imperial Airways route survey flight from the UK to Cape Town and back in the Armstrong Siddeley Jaguar–powered de Havilland DH.50J floatplane G-EBFO. The outward route was London–Paris–Marseille–Pisa–Taranto–Athens–Sollum–Cairo–Luxor–Aswan–Wadi Halfa–Atbara–Khartoum–Malakal–Mongalla–Jinja–Kisumu–Tabora–Abercorn–Ndola–Broken Hill–Livingstone–Bulawayo–Pretoria–Johannesburg–Kimberley–Bloemfontein–Cape Town. On his return Cobham was awarded the Air Force Cross for his services to aviation.", "title": "Empire services" }, { "paragraph_id": 7, "text": "On 30 June 1926, Cobham took off from the River Medway at Rochester in G-EBFO to make an Imperial Airways route survey for a service to Melbourne, arriving on 15 August 1926. 
He left Melbourne on 29 August 1926, and, after completing 28,000 nautical miles (32,000 mi; 52,000 km) in 320 hours flying time over 78 days, he alighted on the Thames at Westminster on 1 October 1926. Cobham was met by the Secretary of State for Air, Sir Samuel Hoare, and was subsequently knighted by HM King George V.", "title": "Empire services" }, { "paragraph_id": 8, "text": "On 27 December 1926, Imperial Airways de Havilland DH.66 Hercules G-EBMX City of Delhi left Croydon for a survey flight to India. The flight reached Karachi on 6 January 1927 and Delhi on 8 January 1927. The aircraft was named by Lady Irwin, wife of the Viceroy, on 10 January 1927. The return flight left on 1 February 1927 and arrived at Heliopolis, Cairo on 7 February 1927. The flying time from Croydon to Delhi was 62 hours 27 minutes and Delhi to Heliopolis 32 hours 50 minutes.", "title": "Empire services" }, { "paragraph_id": 9, "text": "Regular services on the Cairo to Basra route began on 12 January 1927 using DH.66 aircraft, replacing the previous RAF mail flight. Following two years of negotiations with the Persian authorities regarding overflight rights, a London to Karachi service started on 30 March 1929, taking seven days and consisting of a flight from London to Basle, a train to Genoa, a Short S.8 Calcutta flying boat to Alexandria, a train to Cairo and finally a DH.66 flight to Karachi. The route was extended as far as Delhi on 29 December 1929. The route across Europe and the Mediterranean changed many times over the next few years but almost always involved a rail journey.", "title": "Empire services" }, { "paragraph_id": 10, "text": "In April 1931 an experimental London–Australia air mail flight took place; after the DH66 City of Cairo crash-landed in Timor on 19 April, having run out of fuel, the mail was transferred in the Dutch East Indies and took 26 days in total to reach Sydney. 
For the passenger flight leaving London on 1 October 1932, the Eastern route was switched from the Persian to the Arabian side of the Persian Gulf, and Handley Page HP 42 airliners were introduced on the Cairo to Karachi sector. The move saw the establishment of an airport and rest house, Mahatta Fort, in the Trucial State of Sharjah, now part of the United Arab Emirates.", "title": "Empire services" }, { "paragraph_id": 11, "text": "On 29 May 1933 an England to Australia survey flight took off, operated by Imperial Airways Armstrong Whitworth Atalanta G-ABTL Astraea. Major H. G. Brackley, Imperial Airways' Air Superintendent, was in charge of the flight. Astraea flew Croydon-Paris-Lyon-Rome-Brindisi-Athens-Alexandria-Cairo where it followed the normal route to Karachi then onwards to Jodhpur-Delhi-Calcutta-Akyab-Rangoon-Bangkok-Prachuab-Alor Setar-Singapore-Palembang-Batavia-Sourabaya-Bima-Koepang-Bathurst Island-Darwin-Newcastle Waters-Camooweal-Cloncurry-Longreach-Roma-Toowoomba reaching Eagle Farm, Brisbane on 23 June. Sydney was visited on 26 June, Canberra on 28 June and Melbourne on 29 June.", "title": "Empire services" }, { "paragraph_id": 12, "text": "There followed a rapid eastern extension. The first London to Calcutta service departed on 1 July 1933, the first London to Rangoon service on 23 September 1933, the first London to Singapore service on 9 December 1933, and the first London to Brisbane service on 8 December 1934, with Qantas responsible for the Singapore to Brisbane sector. (The 1934 start was for mail; passenger flights to Brisbane began the following April.) The first London to Hong Kong passengers departed London on 14 March 1936 following the establishment of a branch from Penang to Hong Kong.", "title": "Empire services" }, { "paragraph_id": 13, "text": "On 28 February 1931 a weekly service began between London and Mwanza on Lake Victoria in Tanganyika as part of the proposed route to Cape Town. 
On 9 December 1931 the Imperial Airways service for Central Africa was extended experimentally to Cape Town for the carriage of Christmas mail. The aircraft used on the last sector, DH66 G-AARY City of Karachi, arrived in Cape Town on 21 December 1931. On 20 January 1932 a mail-only route from London to Cape Town was opened. On 27 April this route was opened to passengers and took 10 days. In early 1933 Atalantas replaced the DH.66s on the Kisumu to Cape Town sector of the London to Cape Town route. On 9 February 1936 the trans-Africa route was opened by Imperial Airways between Khartoum and Kano in Nigeria. This route was extended to Lagos on 15 October 1936.", "title": "Empire services" }, { "paragraph_id": 14, "text": "In 1937, with the introduction of Short Empire flying boats built by Short Brothers, Imperial Airways could offer a through-service from Southampton to the Empire. The journey to the Cape was via Marseille, Rome, Brindisi, Athens, Alexandria, Khartoum, Port Bell, Kisumu and onwards by land-based craft to Nairobi, Mbeya and eventually Cape Town. Survey flights were also made across the Atlantic and to New Zealand. By mid-1937 Imperial had completed its thousandth service to the Empire. Starting in 1938 Empire flying boats also flew between Britain and Australia via India and the Middle East.", "title": "Empire services" }, { "paragraph_id": 15, "text": "In March 1939 three Shorts a week left Southampton for Australia, reaching Sydney after ten days of flying and nine overnight stops. Three more left for South Africa, taking six flying days to Durban.", "title": "Empire services" }, { "paragraph_id": 16, "text": "Imperial's aircraft were small, most seating fewer than twenty passengers; about 50,000 passengers used Imperial Airways in the 1930s. Most passengers on intercontinental routes or on services within and between British colonies were men in colonial administration, business or research. 
To begin with only the wealthy could afford to fly, but passenger lists gradually diversified. Travel experiences of flying low and slow were recounted enthusiastically in newspapers, magazines and books. There was opportunity for sightseeing from the air and at stops.", "title": "Empire services" }, { "paragraph_id": 17, "text": "Imperial Airways stationed its all-male flight deck crew, cabin crew and ground crew along the length of its routes. Specialist engineers and inspectors – and ground crew on rotation or leave – travelled on the airline without generating any seat revenue. Several air crew lost their lives in accidents. At the end of the 1930s crew numbers were approximately 3,000. All crew were expected to be ambassadors for Britain and the British Empire.", "title": "Empire services" }, { "paragraph_id": 18, "text": "In 1934 the government began negotiations with Imperial Airways to establish a service (Empire Air Mail Scheme) to carry mail by air on routes served by the airline. Indirectly these negotiations led to the dismissal in 1936 of Sir Christopher Bullock, the Permanent Under-Secretary at the Air Ministry, who was found by a Board of Inquiry to have abused his position in seeking a position on the board of the company while these negotiations were in train. The government, including the Prime Minister, regretted the decision to dismiss him, later finding that, in fact, no corruption was alleged, and sought Bullock's reinstatement, which he declined.", "title": "Empire services" }, { "paragraph_id": 19, "text": "The Empire Air Mail Programme started in July 1937, delivering anywhere for 1½d. per ounce. By mid-1938 a hundred tons of mail had been delivered to India and a similar amount to Africa. In the same year, construction was started on the Empire Terminal in Victoria, London, designed by A. Lakeman and with a statue by Eric Broadbent, Speed Wings Over the World, gracing the portal above the main entrance. 
From the terminal there were train connections to Imperial's flying boats at Southampton and coaches to its landplane base at Croydon Airport. The terminal remained in use until 1980.", "title": "Empire services" }, { "paragraph_id": 20, "text": "To help promote use of the Air Mail service, in June and July 1939, Imperial Airways participated with Pan American Airways in providing a special \"around the world\" service; Imperial carried the souvenir mail from Foynes, Ireland, to Hong Kong, as part of the eastbound New York to New York route. Pan American provided service from New York to Foynes (departing 24 June, via the first flight of Northern FAM 18) and Hong Kong to San Francisco (via FAM 14), and United Airlines carried it on the final leg from San Francisco to New York, arriving on 28 July.", "title": "Empire services" }, { "paragraph_id": 21, "text": "Captain H. W. C. Alger was the pilot for the inaugural air mail flight carrying mail from England to Australia for the first time, on the Short Empire flying boat Castor, for Imperial Airways' Empire Air Routes in 1937.", "title": "Empire services" }, { "paragraph_id": 22, "text": "In November 2016, 80 years later, the Crete2Cape Vintage Air Rally flew this old route with fifteen vintage aeroplanes – a celebration of the skill and determination of these early aviators.", "title": "Empire services" }, { "paragraph_id": 23, "text": "Before the outbreak of war on 1 September 1939, the British government had already implemented the Air Navigation (Restriction in Time of War) Order 1939. The Order required the military takeover of most civilian airfields in the UK, the cessation of all private flying without individual flight permits, and other emergency measures. It was administered by a statutory department of the Air Ministry titled National Air Communications (NAC). 
By 1 September 1939, the aircraft and administrations of Imperial Airways and British Airways Ltd were physically transferred to Bristol (Whitchurch) Airport, to be operated jointly by NAC. On 1 April 1940, Imperial Airways Ltd and British Airways Ltd were officially combined into a new company, British Overseas Airways Corporation (BOAC), that had already been formed on 24 November 1939 with retrospective financial arrangements.", "title": "Second World War" }, { "paragraph_id": 24, "text": "Imperial Airways operated many types of aircraft from its formation on 1 April 1924 until 1 April 1940 when all aircraft still in service were transferred to BOAC.", "title": "Aircraft" } ]
Imperial Airways was an early British commercial long-range airline, operating from 1924 to 1939 and principally serving the British Empire routes to South Africa, India, Australia and the Far East, including Malaya and Hong Kong. Passengers were typically businessmen or colonial administrators, and most flights carried about 20 passengers or fewer. Accidents were frequent: in the first six years, 32 people died in seven incidents. Imperial Airways never achieved the levels of technological innovation of its competitors and was merged into the British Overseas Airways Corporation (BOAC) in 1939. BOAC in turn merged with British European Airways (BEA) in 1974 to form British Airways.
2002-02-25T15:51:15Z
2023-09-26T22:10:03Z
[ "Template:Distinguish", "Template:Convert", "Template:Cite magazine", "Template:Cite newspaper The Times", "Template:Use British English", "Template:Infobox company", "Template:Sfrac", "Template:Expand list", "Template:Reflist", "Template:Cite book", "Template:Citation", "Template:PM20", "Template:Use dmy dates", "Template:ASN accident", "Template:British Airways", "Template:Short description", "Template:Citation needed", "Template:Nowrap", "Template:Cite web", "Template:Harvp", "Template:Cite journal", "Template:ISBN", "Template:Commons category" ]
https://en.wikipedia.org/wiki/Imperial_Airways
15,358
Insanity defense
The insanity defense, also known as the mental disorder defense, is an affirmative defense by excuse in a criminal case, arguing that the defendant is not responsible for their actions due to a psychiatric disease at the time of the criminal act. This is contrasted with an excuse of provocation, in which the defendant is responsible, but the responsibility is lessened due to a temporary mental state. It is also contrasted with the justification of self-defense or with the mitigation of imperfect self-defense. The insanity defense is also contrasted with a finding that a defendant cannot stand trial in a criminal case because a mental disease prevents them from effectively assisting counsel, with a civil finding in trusts and estates where a will is nullified because it was made when a mental disorder prevented a testator from recognizing the natural objects of their bounty, and with involuntary civil commitment to a mental institution, when a person is found to be gravely disabled or to be a danger to themself or to others. Legal definitions of insanity or mental disorder are varied, and include the M'Naghten Rule, the Durham rule, the 1953 British Royal Commission on Capital Punishment report, the ALI rule (American Law Institute Model Penal Code rule), and other provisions, often relating to a lack of mens rea ("guilty mind"). In the criminal laws of Australia and Canada, statutory legislation enshrines the M'Naghten Rules, with the terms defense of mental disorder, defense of mental illness or not criminally responsible by reason of mental disorder employed. Being incapable of distinguishing right from wrong is one basis for being found to be legally insane as a criminal defense. It originated in the M'Naghten Rule, and has been reinterpreted and modernized through more recent cases, such as People v. Serravo. In the United Kingdom, Ireland, and the United States, use of the defense is rare.
Mitigating factors, including things not eligible for the insanity defense such as intoxication and partial defenses such as diminished capacity and provocation, are used more frequently. The defense is based on evaluations by forensic mental health professionals with the appropriate test according to the jurisdiction. Their testimony guides the jury, but they are not allowed to testify to the accused's criminal responsibility, as this is a matter for the jury to decide. Similarly, mental health practitioners are restrained from making a judgment on the "ultimate issue"—whether the defendant is insane. Some jurisdictions require the evaluation to address the defendant's ability to control their behavior at the time of the offense (the volitional limb). A defendant claiming the defense is pleading "not guilty by reason of insanity" (NGRI) or "guilty but insane or mentally ill" in some jurisdictions which, if successful, may result in the defendant being committed to a psychiatric facility for an indeterminate period. Non compos mentis (Latin) is a legal term meaning "not of sound mind". Non compos mentis derives from the Latin non meaning "not", compos meaning "control" or "command", and mentis (genitive singular of mens), meaning "of mind". It is the direct opposite of Compos mentis (of a sound mind). Although typically used in law, this term can also be used metaphorically or figuratively; e.g. when one is in a confused state, intoxicated, or not of sound mind. The term may be applied when a determination of competency needs to be made by a physician for purposes of obtaining informed consent for treatments and, if necessary, assigning a surrogate to make health care decisions. While the proper sphere for this determination is in a court of law, this is practically, and most frequently, made by physicians in the clinical setting. 
In English law, the rule of non compos mentis was most commonly used when the defendant invoked religious or magical explanations for behaviour. The concept of defense by insanity has existed since ancient Greece and Rome. However, in colonial America a delusional Dorothy Talbye was hanged in 1638 for murdering her daughter, as at the time Massachusetts's common law made no distinction between insanity (or mental illness) and criminal behavior. Edward II, under English Common law, declared that a person was insane if their mental capacity was no more than that of a "wild beast" (in the sense of a dumb animal, rather than being frenzied). The first complete transcript of an insanity trial dates to 1724. It is likely that the insane, like those under 14, were spared trial by ordeal. When trial by jury replaced this, the jury members were expected to find the insane guilty but then refer the case to the King for a Royal Pardon. From 1500 onwards, juries could acquit the insane, and detention required a separate civil procedure. The Criminal Lunatics Act 1800, passed with retrospective effect following the acquittal of James Hadfield, mandated detention at the regent's pleasure (indefinitely) even for those who, although insane at the time of the offence, were now sane. The M'Naghten Rules of 1843 were not a codification or definition of insanity but rather the responses of a panel of judges to hypothetical questions posed by Parliament in the wake of Daniel M'Naghten's acquittal for the homicide of Edward Drummond, whom he mistook for British Prime Minister Robert Peel. The rules define the defense as "at the time of committing the act the party accused was labouring under such a defect of reason, from disease of the mind, as not to know the nature and quality of the act he was doing, or as not to know that what he was doing was wrong." The key is that the defendant could not appreciate the nature of their actions during the commission of the crime. In Ford v. 
Wainwright 477 U.S. 399 (1986), the US Supreme Court upheld the common law rule that the insane cannot be executed. It further stated that a person under the death penalty is entitled to a competency evaluation and to an evidentiary hearing in court on the question of their competency to be executed. In Wainwright v. Greenfield (1986), the Court ruled that it was fundamentally unfair for the prosecutor to comment during the court proceedings on the petitioner's silence invoked as a result of a Miranda warning. The prosecutor had argued that the respondent's silence after receiving Miranda warnings was evidence of his sanity. In 2006, the US Supreme Court decided Clark v. Arizona, upholding Arizona's restrictions on the insanity defense. Kahler v. Kansas, 589 U.S. ___ (2020), is a case of the United States Supreme Court in which the justices ruled that the Eighth and Fourteenth Amendments of the United States Constitution do not require states to adopt an insanity defense based on a defendant's incapacity to recognize right from wrong. The defense of insanity takes different guises in different jurisdictions, and there are differences between legal systems with regard to the availability, definition and burden of proof, as well as the role of judges, juries and medical experts. In jurisdictions where there are jury trials, it is common for the decision about the sanity of an accused to be determined by the jury. An important distinction to be made is the difference between competency and criminal responsibility. Competency largely deals with the defendant's present condition, while criminal responsibility addresses the condition at the time the crime was committed. In the United States, a trial in which the insanity defense is invoked typically involves the testimony of psychiatrists or psychologists who will, as expert witnesses, present opinions on the defendant's state of mind at the time of the offense.
Therefore, a person whose mental disorder is not in dispute is determined to be sane if the court decides that despite a "mental illness" the defendant was responsible for the acts committed and will be treated in court as a normal defendant. If the person has a mental illness and it is determined that the mental illness interfered with the person's ability to determine right from wrong (and other associated criteria a jurisdiction may have) and if the person is willing to plead guilty or is proven guilty in a court of law, some jurisdictions have an alternative option known as either a Guilty but Mentally Ill (GBMI) or a Guilty but Insane verdict. The GBMI verdict is available as an alternative to, rather than in lieu of, a "not guilty by reason of insanity" verdict. Michigan (1975) was the first state to create a GBMI verdict, after two prisoners released after being found NGRI committed violent crimes within a year of release, one raping two women and the other killing his wife. The notion of temporary insanity argues that a defendant was insane during the commission of a crime, but they later regained their sanity after the criminal act was carried out. This legal defense developed in the 19th century and became especially associated with the defense of individuals committing crimes of passion. The defense was first successfully used by U.S. Congressman Daniel Sickles of New York in 1859 after he had killed his wife's lover, Philip Barton Key II. The temporary insanity defense was unsuccessfully pleaded by Charles J. Guiteau, who assassinated President James A. Garfield in 1881. The United States Supreme Court (in Penry v. Lynaugh) and the United States Court of Appeals for the Fifth Circuit (in Bigby v.
Dretke) have been clear in their decisions that jury instructions in death penalty cases that do not ask about mitigating factors regarding the defendant's mental health violate the defendant's Eighth Amendment rights, saying that the jury is to be instructed to consider mitigating factors when answering unrelated questions. This ruling suggests specific explanations to the jury are necessary to weigh mitigating factors. Diminished responsibility or diminished capacity can be employed as a mitigating factor or partial defense to crimes. In the United States, diminished capacity is applicable to more circumstances than the insanity defense. The Homicide Act 1957 is the statutory basis for the defense of diminished responsibility in England and Wales, whereas in Scotland it is a product of case law. The number of findings of diminished responsibility has been matched by a fall in unfitness to plead and insanity findings. A plea of diminished capacity is different from a plea of insanity in that "reason of insanity" is a full defense while "diminished capacity" is merely a plea to a lesser crime. Depending on jurisdiction, circumstances and crime, intoxication may be a defense, a mitigating factor or an aggravating factor. However, most jurisdictions differentiate between voluntary intoxication and involuntary intoxication. In some cases, intoxication (usually involuntary intoxication) may be covered by the insanity defense. Several cases have ruled that persons found not guilty by reason of insanity may not withdraw the defense in a habeas petition to pursue an alternative, although there have been exceptions in other rulings. In State v. Connelly, 700 A.2d 694 (Conn. App. Ct. 1997), the petitioner who had originally been found not guilty by reason of insanity and committed for ten years to the jurisdiction of a Psychiatric Security Review Board, filed a pro se writ of habeas corpus and the court vacated his insanity acquittal.
He was granted a new trial and found guilty of the original charges, receiving a prison sentence of 40 years. In the landmark case of Frendak v. United States in 1979, the court ruled that the insanity defense cannot be imposed upon an unwilling defendant if an intelligent defendant voluntarily wishes to forgo the defense. Media coverage of such cases gives the impression that the defense is widely used, but this is not the case. According to an eight-state study, the insanity defense is used in less than 1% of all court cases and, when used, has only a 26% success rate. Of those cases that were successful, 90% of the defendants had been previously diagnosed with mental illness. In the United States, those found to have been not guilty by reason of mental disorder or insanity are generally then required to undergo psychiatric treatment in a mental institution, except in the case of temporary insanity. In England and Wales, under the Criminal Procedure (Insanity and Unfitness to Plead) Act of 1991 (amended by the Domestic Violence, Crime and Victims Act, 2004 to remove the option of a guardianship order), the court can mandate a hospital order, a restriction order (where release from hospital requires the permission of the Home Secretary), a "supervision and treatment" order, or an absolute discharge. Unlike defendants who are found guilty of a crime, they are not institutionalized for a fixed period, but rather held in the institution until they are determined not to be a threat. Authorities making this decision tend to be cautious, and as a result, defendants can often be institutionalized for longer than they would have been incarcerated in prison. In Australia there are nine law units, each of which may have different rules governing mental impairment defenses.
In South Australia, the Criminal Law Consolidation Act 1935 (SA) provides that: 269C—Mental competence A person is mentally incompetent to commit an offence if, at the time of the conduct alleged to give rise to the offence, the person is suffering from a mental impairment and, in consequence of the mental impairment— 269H — Mental unfitness to stand trial A person is mentally unfit to stand trial on a charge of an offence if the person's mental processes are so disordered or impaired that the person is — In Victoria the current defence of mental impairment was introduced in the Crimes (Mental Impairment and Unfitness to be Tried) Act 1997 which replaced the common law defence of insanity and indefinite detention at the governor's pleasure with the following: These requirements are almost identical to the M'Naghten Rules, substituting "mental impairment" for "disease of the mind". In New South Wales, the defence has been renamed the 'Defence of Mental Illness' in Part 4 of the Mental Health (Forensic Provisions) Act 1990. However, definitions of the defence are derived from M'Naghten's case and have not been codified. Whether a particular condition amounts to a disease of the mind is not a medical but a legal question to be decided in accordance with the ordinary rules of interpretation. This defence is an exception to the Woolmington v DPP (1935) 'golden thread', as the party raising the issue of the defence of mental illness bears the burden of proving this defence on the balance of probabilities. Generally, the defence will raise the issue of insanity. However, the prosecution can raise it in exceptional circumstances: R v Ayoub (1984). Australian cases have further qualified and explained the M'Naghten Rules. 
The NSW Supreme Court has held there are two limbs to the M'Naghten Rules: that the accused did not know what he was doing, or that the accused did not appreciate that what he was doing was morally wrong; in both cases the accused must be operating under a 'defect of reason, from a disease of the mind'. The High Court in R v Porter stated that the condition of the accused's mind is relevant only at the time of the actus reus. In Woodbridge v The Queen the court stated that a symptom indicating a disease of the mind must be prone to recur and be the result of an underlying pathological infirmity. A 'defect of reason' is the inability to think rationally and pertains to incapacity to reason, rather than having unsound ideas or difficulty with such a task. Examples of disease of the mind include arteriosclerosis (considered a disease of the mind because the hardening of the arteries affects the mind). The defence of mental disorder is codified in section 16 of the Criminal Code which states, in part: To establish a claim of mental disorder the party raising the issue must show on a balance of probabilities first that the person who committed the act was suffering from a "disease of the mind", and second, that at the time of the offence they were either 1) unable to appreciate the "nature and quality" of the act, or 2) did not know it was "wrong". The meaning of the word "wrong" was determined in the Supreme Court case of R. v. Chaulk [1990] 3 S.C.R. which held that "wrong" was not restricted to "legally wrong" but encompassed "morally wrong" as well. The current legislative scheme was created by the Parliament of Canada after the previous scheme was found unconstitutional by the Supreme Court of Canada in R. v. Swain. The new provisions also replaced the old insanity defense with the current mental disorder defence. Once a person is found not criminally responsible ("NCR"), they will have a hearing by a Review Board within 45 days (90 days if the court extends the delay).
A Review Board is established under Part XX.1 of the Criminal Code and is composed of at least three members, a person who is a judge or eligible to be a judge, a psychiatrist and another expert in a relevant field, such as social work, criminology or psychology. Parties at a Review Board hearing are usually the accused, the Crown and the hospital responsible for the supervision or assessment of the accused. A Review Board is responsible for both accused persons found NCR or accused persons found unfit to stand trial on account of mental disorder. A Review Board dealing with an NCR offender must consider two questions: whether the accused is a "significant threat to the safety of the public" and, if so, what the "least onerous and least restrictive" restrictions on the liberty of the accused should be in order to mitigate such a threat. Proceedings before a Review Board are inquisitorial rather than adversarial. Often the Review Board will be active in conducting an inquiry. Where the Review Board is unable to conclude that the accused is a significant threat to the safety of the public, the review board must grant the accused an absolute discharge, an order essentially terminating the jurisdiction of the criminal law over the accused. Otherwise, the Review Board must order that the accused be either discharged subject to conditions or detained in a hospital, both subject to conditions. The conditions imposed must be the least onerous and least restrictive necessary to mitigate any danger the accused may pose to others. Since the Review Board's authority derives from the criminal law power under s. 91(27) of the Constitution Act, 1867, the sole justification for its jurisdiction is public safety. Therefore, the nature of the inquiry is the danger the accused may pose to public safety rather than whether the accused is "cured".
For instance, many "sick" accused persons are discharged absolutely on the basis that they are not a danger to the public while many "sane" accused are detained on the basis that they are dangerous. Moreover, the notion of "significant threat to the safety of the public" is a "criminal threat". This means that the Review Board must find that the threat posed by the accused is of a criminal nature. While proceedings before a Review Board are less formal than in court, there are many procedural safeguards available to the accused given the potential indefinite nature of Part XX.1. Any party may appeal against the decision of a Review Board. In 1992 when the new mental disorder provisions were enacted, Parliament included "capping" provisions which were to be enacted at a later date. These capping provisions limited the jurisdiction of a Review Board over an accused based on the maximum potential sentence had the accused been convicted (e.g. there would be a cap of 5 years if the maximum penalty for the index offence is 5 years). However, these provisions were never proclaimed into force and were subsequently repealed. A Review Board must hold a hearing every 12 months (unless extended to 24 months) until the accused is discharged absolutely. The issue of mental disorder may also come into play before a trial even begins if the accused's mental state prevents the accused from being able to appreciate the nature of a trial and to conduct a defence. An accused who is found to be unfit to stand trial is subject to the jurisdiction of a Review Board. While the considerations are essentially the same, there are a few provisions which apply only to unfit accused. A Review Board must determine whether the accused is fit to stand trial.
Regardless of the determination, the Review Board must then determine what conditions should be imposed on the accused, considering both the protection of the public and the maintenance of the fitness of the accused (or conditions which would render the accused fit). Previously an absolute discharge was unavailable to an unfit accused. However, in R. v. Demers, the Supreme Court of Canada struck down the provision restricting the availability of an absolute discharge to an accused person who is deemed both "permanently unfit" and not a significant threat to the safety of the public. Presently a Review Board may recommend a judicial stay of proceedings in the event that it finds the accused both "permanently unfit" and non-dangerous. The decision is left to the court having jurisdiction over the accused. An additional requirement for an unfit accused is the holding of a "prima facie case" hearing every two years. The Crown must demonstrate to the court having jurisdiction over the accused that it still has sufficient evidence to try the accused. If the Crown fails to meet this burden then the accused is discharged and proceedings are terminated. The nature of the hearing is virtually identical to that of a preliminary hearing. In Denmark, a psychotic person who commits a criminal offense is declared guilty but is sentenced to mandatory treatment instead of prison. Section 16 of the penal code states that "Persons, who, at the time of the act, were irresponsible owing to mental illness or similar conditions or to a pronounced mental deficiency, are not punishable". This means that in Denmark, 'insanity' is a legal term rather than a medical term and that the court retains the authority to decide whether an accused person is irresponsible. In Finland, punishments can only be administered if the accused is compos mentis, of sound mind; not if the accused is insane (syyntakeeton, literally "unable to guarantee [shoulder the responsibility of] guilt").
Thus, an insane defendant may be found guilty based on the facts and their actions just as a sane defendant, but the insanity will only affect the punishment. The definition of insanity is similar to the M'Naghten criterion above: "the accused is insane, if during the act, due to a mental illness, profound mental retardation or a severe disruption of mental health or consciousness, he cannot understand the actual nature of his act or its illegality, or that his ability to control his behavior is critically weakened". If an accused is suspected to be insane, the court must consult the National Institute for Health and Welfare (THL), which is obliged to place the accused in involuntary commitment if they are found insane. The offender receives no judicial punishment; they become a patient under the jurisdiction of THL, and must be released immediately once the conditions of involuntary commitment are no longer fulfilled. Diminished responsibility is also available, resulting in lighter sentences. According to section 20 of the German criminal code, those who commit an illegal act because a mental disorder makes them unable to see the wrong of the act or to act on this insight are considered not guilty. Section 63 stipulates that if the offender is deemed at risk of committing further offences that will harm others or cause grave economic damage, and if they therefore pose a continuing threat to public safety, they shall be committed to a psychiatric hospital in lieu of a custodial or suspended prison sentence. If the ability to recognize the right or wrong of an action, or the ability to act accordingly, is lost due to a mental disorder, then the defendant cannot be punished under Japanese criminal law, so if this is recognized during a trial, a judgment of not guilty will be given. This is, however, rare, happening in only around 1 in 500,000 cases.
Section 39 of the Dutch criminal code stipulates: "Not culpable is he who performs an act that he cannot be imputed with due to the deficient development or pathological disorder of his mental faculties". Critical, therefore, are the definitions of "deficient development" and/or "pathological [mental] disorder". These are to be verified by somatomedical and/or psychiatric specialists. An inculpability defense needs to conform to the following criteria: If the inculpability defense succeeds, the defendant cannot be ordered to incarceration proper. If the defendant is deemed to be criminally insane (i.e. deemed to pose a risk to himself or others), the court instead may order involuntary admission to a mental institution for further evaluation and/or treatment. The court can opt for a definite period of time (when complete or at least sufficient recovery of mental faculties on a relatively short time scale is probable) or an indefinite period of time (when the defendant's ailment is deemed to be difficult or impossible to treat, or can be supposed to be refractory to treatment). If the inculpability defense succeeds only partly (i.e. if the crime cannot be completely excused because of a minor degree of deficient development or pathological mental disorder), there may still be a legal basis for a diminished culpability of the defendant; in such case, a diminished prison sentence should be ordered. This can also be combined with the aforementioned involuntary admission to a mental institution, although in these cases the two 'sentences' are often served in parallel. In Norway, psychotic perpetrators are declared guilty but not punished and, instead of prison, they are sentenced to mandatory treatment. Section 44 of the penal code states specifically that "a person who at the time of the crime was insane or unconscious is not punished".
It is the responsibility of a criminal court to consider whether the accused may have been psychotic or suffering from other severe mental defects when perpetrating a criminal act. Thus, even though Anders Behring Breivik declared himself to be sane, the court hearing his case considered the question of his sanity. Insanity is determined through a judicial decision issued on the basis of expert opinions of psychiatrists and psychologists. A forensic psychiatric examination is used to establish insanity. The result of the forensic examination is then subjected to a legal assessment, taking into account other circumstances of the case, from which a conclusion is drawn about the defendant's sanity or insanity. The Criminal Code of Russia establishes that a person who during the commission of an illegal act was in a state of insanity, that is, could not be aware of the actual nature and social danger of their actions or was unable to control them due to a chronic mental disorder, a temporary mental disorder, or dementia is not subject to criminal liability. In Sweden, psychotic perpetrators are seen as accountable, but the sanction is, if they are psychotic at the time of the trial, forensic mental care. Although use of the insanity defense in the UK is rare, insanity pleas have steadily increased since the Criminal Procedure (Insanity and Unfitness to Plead) Act 1991. The Scottish Law Commission, in its Discussion Paper No 122 on Insanity and Diminished Responsibility (2003), confirms that the law has not substantially changed from the position stated in Hume's Commentaries in 1797: We may next attend to the case of those unfortunate persons, who have plead the miserable defense of idiocy or insanity.
Which condition, if it is not an assumed or imperfect, but a genuine and thorough insanity, and is proved by the testimony of intelligent witnesses, makes the act like that of an infant, and equally bestows the privilege of an entire exemption from any manner of pain; Cum alterum innocentia concilii tuetur, alterum fati infelicitas excusat. I say, where the insanity is absolute, and is duly proved: For if reason and humanity enforce the plea in these circumstances, it is no less necessary to observe a caution and reserve in applying the law, as shall hinder it from being understood, that there is any privilege in a case of mere weakness of intellect, or a strange and moody humor, or a crazy and capricious or irritable temper. In none of these situations does or can the law excuse the offender. Because such constitutions are not exclusive of a competent understanding of the true state of the circumstances in which the deed is done, nor of the subsistence of some steady and evil passion, grounded in those circumstances, and directed to a certain object. To serve the purpose of a defense in law, the disorder must therefore amount to an absolute alienation of reason, ut continua mentis alienatione, omni intellectu careat – such a disease as deprives the patient of the knowledge of the true aspect and position of things about them – hinders them from distinguishing friend from foe – and gives them up to the impulse of their own distempered fancy. The phrase "absolute alienation of reason" is still regarded as at the core of the defense in the modern law (see HM Advocate v Kidd (1960) JC 61 and Brennan v HM Advocate (1977)). In the United States, variances in the insanity defense between states, and in the federal court system, are attributable to differences with respect to three key issues. In Foucha v.
Louisiana (1992) the Supreme Court of the United States ruled that a person could not be held "indefinitely" for psychiatric treatment following a finding of not guilty by reason of insanity. In the United States, a criminal defendant may plead insanity in federal court, and in the state courts of every state except for Idaho, Kansas, Montana, and Utah. However, defendants in states that disallow the insanity defense may still be able to demonstrate that a defendant was not capable of forming intent to commit a crime as a result of mental illness. In Kahler v. Kansas (2020), the U.S. Supreme Court held, in a 6–3 ruling, that a state does not violate the Due Process Clause by abolishing an insanity defense based on a defendant's incapacity to distinguish right from wrong. The Court emphasized that state governments have broad discretion to choose laws defining "the precise relationship between criminal culpability and mental illness." Each state and the federal court system currently uses one of the following "tests" to define insanity for purposes of the insanity defense. Over its decades of use the definition of insanity has been modified by statute, with changes to the availability of the insanity defense, what constitutes legal insanity, whether the prosecutor or defendant has the burden of proof, the standard of proof required at trial, trial procedures, and to commitment and release procedures for defendants who have been acquitted based on a finding of insanity. The guidelines for evaluating the criminal responsibility of defendants claiming to be insane were settled in the British courts in the case of Daniel M'Naghten in 1843. M'Naghten was a Scottish woodcutter who killed the secretary to the prime minister, Edward Drummond, in a botched attempt to assassinate the prime minister himself.
M'Naghten apparently believed that the prime minister was the architect of the myriad personal and financial misfortunes that had befallen him. During his trial, nine witnesses testified that he was insane, and the jury acquitted him, finding him "not guilty by reason of insanity". The House of Lords asked the judges of the common law courts to answer five questions on insanity as a criminal defence, and the formulation that emerged from their review – that a defendant should not be held responsible for their actions only if, as a result of their mental disease or defect, they (i) did not know that their act would be wrong, or (ii) did not understand the nature and quality of their actions – became the basis of the law governing legal responsibility in cases of insanity in England. Under the rules, loss of control because of mental illness was no defense. The M'Naghten rule was embraced with almost no modification by American courts and legislatures for more than 100 years, until the mid-20th century and the case of Durham v. United States. In the Durham case, the court ruled that a defendant is entitled to acquittal if the crime was the product of their mental illness (i.e., the crime would not have been committed but for the disease). The test, also called the Product Test, is broader than either the M'Naghten test or the irresistible impulse test. It set more lenient guidelines for the insanity defense, and it addressed the problem of convicting mentally ill defendants, which the M'Naghten Rule permitted. However, the Durham standard drew much criticism because of its expansive definition of legal insanity. The Model Penal Code, published by the American Law Institute, provides a standard for legal insanity that serves as a compromise between the strict M'Naghten Rule, the lenient Durham ruling, and the irresistible impulse test.
Under the MPC standard, which represents the modern trend, a defendant is not responsible for criminal conduct "if at the time of such conduct as a result of mental disease or defect he lacks substantial capacity either to appreciate the criminality of his conduct or to conform his conduct to the requirements of law." The test thus takes into account both the cognitive and volitional components of insanity. After the perpetrator of the assassination attempt on President Reagan was found not guilty by reason of insanity, Congress passed the Insanity Defense Reform Act of 1984. Under this act, the burden of proof was shifted from the prosecution to the defense, and the standard of evidence in federal trials was raised from a preponderance of the evidence to clear and convincing evidence. The ALI test was discarded in favor of a new test that more closely resembled M'Naghten's. Under this new test, only perpetrators suffering from severe mental illness at the time of the crime could successfully employ the insanity defense. The defendant's ability to control himself or herself was no longer a consideration. The Act also curbed the scope of expert psychiatric testimony and adopted stricter procedures regarding the hospitalization and release of those found not guilty by reason of insanity. Those acquitted of a federal offense by reason of insanity have not been able to challenge their psychiatric confinement through a writ of habeas corpus or other remedies. In Archuleta v. Hedrick, 365 F.3d 644 (8th Cir. 2004), the U.S.
Court of Appeals for the Eighth Circuit ruled that persons found not guilty by reason of insanity who later wish to challenge their confinement may not attack their initial successful insanity defense. The appellate court affirmed the lower court's judgment: "Having thus elected to make himself a member of that 'exceptional class' of persons who seek verdicts of not guilty by reason of insanity...he cannot now be heard to complain of the statutory consequences of his election." The court held that no direct attack upon the final judgment of acquittal by reason of insanity was possible. It also held that the collateral attack – that he was not informed that a possible alternative to his commitment was to ask for a new trial – was not a meaningful alternative. As an alternative to the insanity defense, some jurisdictions permit a defendant to plead guilty but mentally ill. A defendant who is found guilty but mentally ill may be sentenced to mental health treatment, at the conclusion of which the defendant will serve the remainder of their sentence in the same manner as any other defendant. In a majority of states, the burden of proving insanity is placed on the defendant, who must prove insanity by a preponderance of the evidence. In a minority of states, the burden is placed on the prosecution, which must prove sanity beyond a reasonable doubt. In federal court, the burden is placed on the defendant, who must prove insanity by clear and convincing evidence. See 18 U.S.C.S. Sec. 17(b); see also A.R.S. Sec. 13-502(C). The insanity plea is used in the U.S. criminal justice system in less than 1% of all criminal cases. Little is known about the intersection of the criminal justice system and the mentally ill: [T]here is no definitive study regarding the percentage of people with mental illness who come into contact with police, appear as criminal defendants, are incarcerated, or are under community supervision. Furthermore, the scope of this issue varies across jurisdictions.
Accordingly, advocates should rely as much as possible on statistics collected by local and state government agencies. Some U.S. states have begun to ban the use of the insanity defense, and in 1994 the Supreme Court denied a petition for certiorari seeking review of a Montana Supreme Court case that upheld Montana's abolition of the defense. Idaho, Kansas, and Utah have also banned the defense. However, a mentally ill defendant can still be found unfit to stand trial in these states. In 2001, the Nevada Supreme Court found that the state's abolition of the defense was unconstitutional as a violation of federal due process. In 2006, the Supreme Court decided Clark v. Arizona, upholding Arizona's limitations on the insanity defense. In the same ruling, the Court noted, "We have never held that the Constitution mandates an insanity defense, nor have we held that the Constitution does not so require." In 2020, the Supreme Court decided Kahler v. Kansas, upholding Kansas's abolition of the insanity defense and stating that the Constitution does not require Kansas to adopt an insanity test that turns on a defendant's ability to recognize that their crime was morally wrong. The insanity defense is further complicated by the underlying differences in philosophy between psychiatrists and psychologists on the one hand and legal professionals on the other. In the United States, a psychiatrist, psychologist, or other mental health professional is often consulted as an expert witness in insanity cases, but the ultimate legal judgment of the defendant's sanity is made by the jury, not by a mental health professional. In other words, mental health professionals provide testimony and professional opinion but are not ultimately responsible for answering legal questions.
[ { "paragraph_id": 0, "text": "The insanity defense, also known as the mental disorder defense, is an affirmative defense by excuse in a criminal case, arguing that the defendant is not responsible for their actions due to a psychiatric disease at the time of the criminal act. This is contrasted with an excuse of provocation, in which the defendant is responsible, but the responsibility is lessened due to a temporary mental state. It is also contrasted with the justification of self defense or with the mitigation of imperfect self-defense. The insanity defense is also contrasted with a finding that a defendant cannot stand trial in a criminal case because a mental disease prevents them from effectively assisting counsel, from a civil finding in trusts and estates where a will is nullified because it was made when a mental disorder prevented a testator from recognizing the natural objects of their bounty, and from involuntary civil commitment to a mental institution, when anyone is found to be gravely disabled or to be a danger to themself or to others.", "title": "" }, { "paragraph_id": 1, "text": "Legal definitions of insanity or mental disorder are varied, and include the M'Naghten Rule, the Durham rule, the 1953 British Royal Commission on Capital Punishment report, the ALI rule (American Legal Institute Model Penal Code rule), and other provisions, often relating to a lack of mens rea (\"guilty mind\"). In the criminal laws of Australia and Canada, statutory legislation enshrines the M'Naghten Rules, with the terms defense of mental disorder, defense of mental illness or not criminally responsible by reason of mental disorder employed. Being incapable of distinguishing right from wrong is one basis for being found to be legally insane as a criminal defense. It originated in the M'Naghten Rule, and has been reinterpreted and modernized through more recent cases, such as People v. 
Serravo.", "title": "" }, { "paragraph_id": 2, "text": "In the United Kingdom, Ireland, and the United States, use of the defense is rare. Mitigating factors, including things not eligible for the insanity defense such as intoxication and partial defenses such as diminished capacity and provocation, are used more frequently.", "title": "" }, { "paragraph_id": 3, "text": "The defense is based on evaluations by forensic mental health professionals with the appropriate test according to the jurisdiction. Their testimony guides the jury, but they are not allowed to testify to the accused's criminal responsibility, as this is a matter for the jury to decide. Similarly, mental health practitioners are restrained from making a judgment on the \"ultimate issue\"—whether the defendant is insane.", "title": "" }, { "paragraph_id": 4, "text": "Some jurisdictions require the evaluation to address the defendant's ability to control their behavior at the time of the offense (the volitional limb). A defendant claiming the defense is pleading \"not guilty by reason of insanity\" (NGRI) or \"guilty but insane or mentally ill\" in some jurisdictions which, if successful, may result in the defendant being committed to a psychiatric facility for an indeterminate period.", "title": "" }, { "paragraph_id": 5, "text": "Non compos mentis (Latin) is a legal term meaning \"not of sound mind\". Non compos mentis derives from the Latin non meaning \"not\", compos meaning \"control\" or \"command\", and mentis (genitive singular of mens), meaning \"of mind\". It is the direct opposite of Compos mentis (of a sound mind).", "title": "Non compos mentis" }, { "paragraph_id": 6, "text": "Although typically used in law, this term can also be used metaphorically or figuratively; e.g. when one is in a confused state, intoxicated, or not of sound mind. 
The term may be applied when a determination of competency needs to be made by a physician for purposes of obtaining informed consent for treatments and, if necessary, assigning a surrogate to make health care decisions. While the proper sphere for this determination is in a court of law, this is practically, and most frequently, made by physicians in the clinical setting.", "title": "Non compos mentis" }, { "paragraph_id": 7, "text": "In English law, the rule of non compos mentis was most commonly used when the defendant invoked religious or magical explanations for behaviour.", "title": "Non compos mentis" }, { "paragraph_id": 8, "text": "The concept of defense by insanity has existed since ancient Greece and Rome. However, in colonial America a delusional Dorothy Talbye was hanged in 1638 for murdering her daughter, as at the time Massachusetts's common law made no distinction between insanity (or mental illness) and criminal behavior. Edward II, under English Common law, declared that a person was insane if their mental capacity was no more than that of a \"wild beast\" (in the sense of a dumb animal, rather than being frenzied). The first complete transcript of an insanity trial dates to 1724. It is likely that the insane, like those under 14, were spared trial by ordeal. When trial by jury replaced this, the jury members were expected to find the insane guilty but then refer the case to the King for a Royal Pardon. From 1500 onwards, juries could acquit the insane, and detention required a separate civil procedure. 
The Criminal Lunatics Act 1800, passed with retrospective effect following the acquittal of James Hadfield, mandated detention at the regent's pleasure (indefinitely) even for those who, although insane at the time of the offence, were now sane.", "title": "History" }, { "paragraph_id": 9, "text": "The M'Naghten Rules of 1843 were not a codification or definition of insanity but rather the responses of a panel of judges to hypothetical questions posed by Parliament in the wake of Daniel M'Naghten's acquittal for the homicide of Edward Drummond, whom he mistook for British Prime Minister Robert Peel. The rules define the defense as \"at the time of committing the act the party accused was labouring under such a defect of reason, from disease of the mind, as not to know the nature and quality of the act he was doing, or as not to know that what he was doing was wrong.\" The key is that the defendant could not appreciate the nature of their actions during the commission of the crime.", "title": "History" }, { "paragraph_id": 10, "text": "In Ford v. Wainwright 477 U.S. 399 (1986), the US Supreme Court upheld the common law rule that the insane cannot be executed. It further stated that a person under the death penalty is entitled to a competency evaluation and to an evidentiary hearing in court on the question of their competency to be executed. In Wainwright v. Greenfield (1986), the Court ruled that it was fundamentally unfair for the prosecutor to comment during the court proceedings on the petitioner's silence invoked as a result of a Miranda warning. The prosecutor had argued that the respondent's silence after receiving Miranda warnings was evidence of his sanity.", "title": "History" }, { "paragraph_id": 11, "text": "In 2006, the US Supreme Court decided Clark v. Arizona, upholding Arizona's restrictions on the insanity defense.", "title": "History" }, { "paragraph_id": 12, "text": "Kahler v. Kansas, 589 U.S. 
___ (2020), is a case of the United States Supreme Court in which the justices ruled that the Eighth and Fourteenth Amendments of the United States Constitution do not require that states adopt the insanity defense in criminal cases that are based on the defendant's ability to recognize right from wrong.", "title": "History" }, { "paragraph_id": 13, "text": "The defense of insanity takes different guises in different jurisdictions, and there are differences between legal systems with regard to the availability, definition and burden of proof, as well as the role of judges, juries and medical experts. In jurisdictions where there are jury trials, it is common for the decision about the sanity of an accused to be determined by the jury.", "title": "Application" }, { "paragraph_id": 14, "text": "An important distinction to be made is the difference between competency and criminal responsibility.", "title": "Application" }, { "paragraph_id": 15, "text": "Competency largely deals with the defendant's present condition, while criminal responsibility addresses the condition at the time the crime was committed.", "title": "Application" }, { "paragraph_id": 16, "text": "In the United States, a trial in which the insanity defense is invoked typically involves the testimony of psychiatrists or psychologists who will, as expert witnesses, present opinions on the defendant's state of mind at the time of the offense.", "title": "Application" }, { "paragraph_id": 17, "text": "Therefore, a person whose mental disorder is not in dispute is determined to be sane if the court decides that despite a \"mental illness\" the defendant was responsible for the acts committed and will be treated in court as a normal defendant. 
If the person has a mental illness and it is determined that the mental illness interfered with the person's ability to determine right from wrong (and other associated criteria a jurisdiction may have) and if the person is willing to plead guilty or is proven guilty in a court of law, some jurisdictions have an alternative option known as either a Guilty but Mentally Ill (GBMI) or a Guilty but Insane verdict. The GBMI verdict is available as an alternative to, rather than in lieu of, a \"not guilty by reason of insanity\" verdict. Michigan (1975) was the first state to create a GBMI verdict, after two prisoners released after being found NGRI committed violent crimes within a year of release, one raping two women and the other killing his wife.", "title": "Application" }, { "paragraph_id": 18, "text": "The notion of temporary insanity argues that a defendant was insane during the commission of a crime, but they later regained their sanity after the criminal act was carried out. This legal defense developed in the 19th century and became especially associated with the defense of individuals committing crimes of passion. The defense was first successfully used by U.S. Congressman Daniel Sickles of New York in 1859 after he had killed his wife's lover, Philip Barton Key II. The temporary insanity defense was unsuccessfully pleaded by Charles J. Guiteau who assassinated president James A. Garfield in 1881.", "title": "Application" }, { "paragraph_id": 19, "text": "The United States Supreme Court (in Penry v. Lynaugh) and the United States Court of Appeals for the Fifth Circuit (in Bigby v. Dretke) have been clear in their decisions that jury instructions in death penalty cases that do not ask about mitigating factors regarding the defendant's mental health violate the defendant's Eighth Amendment rights, saying that the jury is to be instructed to consider mitigating factors when answering unrelated questions. 
This ruling suggests specific explanations to the jury are necessary to weigh mitigating factors.", "title": "Application" }, { "paragraph_id": 20, "text": "Diminished responsibility or diminished capacity can be employed as a mitigating factor or partial defense to crimes. In the United States, diminished capacity is applicable to more circumstances than the insanity defense. The Homicide Act 1957 is the statutory basis for the defense of diminished responsibility in England and Wales, whereas in Scotland it is a product of case law. The number of findings of diminished responsibility has been matched by a fall in unfitness to plead and insanity findings. A plea of diminished capacity is different from a plea of insanity in that \"reason of insanity\" is a full defense while \"diminished capacity\" is merely a plea to a lesser crime.", "title": "Application" }, { "paragraph_id": 21, "text": "Depending on jurisdiction, circumstances and crime, intoxication may be a defense, a mitigating factor or an aggravating factor. However, most jurisdictions differentiate between voluntary intoxication and involuntary intoxication. In some cases, intoxication (usually involuntary intoxication) may be covered by the insanity defense.", "title": "Application" }, { "paragraph_id": 22, "text": "Several cases have ruled that persons found not guilty by reason of insanity may not withdraw the defense in a habeas petition to pursue an alternative, although there have been exceptions in other rulings. In Colorado v. Connelly, 700 A.2d 694 (Conn. App. Ct. 1997), the petitioner who had originally been found not guilty by reason of insanity and committed for ten years to the jurisdiction of a Psychiatric Security Review Board, filed a pro se writ of habeas corpus and the court vacated his insanity acquittal. 
He was granted a new trial and found guilty of the original charges, receiving a prison sentence of 40 years.", "title": "Application" }, { "paragraph_id": 23, "text": "In the landmark case of Frendak v. United States in 1979, the court ruled that the insanity defense cannot be imposed upon an unwilling defendant if an intelligent defendant voluntarily wishes to forgo the defense.", "title": "Application" }, { "paragraph_id": 24, "text": "This increased coverage gives the impression that the defense is widely used, but this is not the case. According to an eight-state study, the insanity defense is used in less than 1% of all court cases and, when used, has only a 26% success rate. Of those cases that were successful, 90% of the defendants had been previously diagnosed with mental illness.", "title": "Usage" }, { "paragraph_id": 25, "text": "In the United States, those found to have been not guilty by reason of mental disorder or insanity are generally then required to undergo psychiatric treatment in a mental institution, except in the case of temporary insanity.", "title": "Psychiatric treatment" }, { "paragraph_id": 26, "text": "In England and Wales, under the Criminal Procedure (Insanity and Unfitness to Plead) Act of 1991 (amended by the Domestic Violence, Crime and Victims Act, 2004 to remove the option of a guardianship order), the court can mandate a hospital order, a restriction order (where release from hospital requires the permission of the Home Secretary), a \"supervision and treatment\" order, or an absolute discharge. Unlike defendants who are found guilty of a crime, they are not institutionalized for a fixed period, but rather held in the institution until they are determined not to be a threat. 
Authorities making this decision tend to be cautious, and as a result, defendants can often be institutionalized for longer than they would have been incarcerated in prison.", "title": "Psychiatric treatment" }, { "paragraph_id": 27, "text": "In Australia there are nine law units, each of which may have different rules governing mental impairment defenses.", "title": "Worldwide" }, { "paragraph_id": 28, "text": "In South Australia, the Criminal Law Consolidation Act 1935 (SA) provides that: 269C—Mental competence", "title": "Worldwide" }, { "paragraph_id": 29, "text": "A person is mentally incompetent to commit an offence if, at the time of the conduct alleged to give rise to the offence, the person is suffering from a mental impairment and, in consequence of the mental impairment—", "title": "Worldwide" }, { "paragraph_id": 30, "text": "269H — Mental unfitness to stand trial", "title": "Worldwide" }, { "paragraph_id": 31, "text": "A person is mentally unfit to stand trial on a charge of an offence if the person's mental processes are so disordered or impaired that the person is —", "title": "Worldwide" }, { "paragraph_id": 32, "text": "In Victoria the current defence of mental impairment was introduced in the Crimes (Mental Impairment and Unfitness to be Tried) Act 1997 which replaced the common law defence of insanity and indefinite detention at the governor's pleasure with the following:", "title": "Worldwide" }, { "paragraph_id": 33, "text": "These requirements are almost identical to the M'Naghten Rules, substituting \"mental impairment\" for \"disease of the mind\".", "title": "Worldwide" }, { "paragraph_id": 34, "text": "In New South Wales, the defence has been renamed the 'Defence of Mental Illness' in Part 4 of the Mental Health (Forensic Provisions) Act 1990. However, definitions of the defence are derived from M'Naghten's case and have not been codified. 
Whether a particular condition amounts to a disease of the mind is not a medical but a legal question to be decided in accordance with the ordinary rules of interpretation. This defence is an exception to the Woolmington v DPP (1935) 'golden thread', as the party raising the issue of the defence of mental illness bears the burden of proving this defence on the balance of probabilities. Generally, the defence will raise the issue of insanity. However, the prosecution can raise it in exceptional circumstances: R v Ayoub (1984).", "title": "Worldwide" }, { "paragraph_id": 35, "text": "Australian cases have further qualified and explained the M'Naghten Rules. The NSW Supreme Court has held there are two limbs to the M'Naghten Rules, that the accused did not know what he was doing, or that the accused did not appreciate that what he was doing was morally wrong, in both cases the accused must be operating under a 'defect of reason, from a disease of the mind'. The High Court in R v Porter stated that the condition of the accused's mind is relevant only at the time of the actus reus. In Woodbridge v The Queen the court stated that a symptom indicating a disease of the mind must be prone to recur and be the result of an underlying pathological infirmity. A 'defect of reason' is the inability to think rationally and pertains to incapacity to reason, rather than having unsound ideas or difficulty with such a task. 
Examples of disease of the mind include Arteriosclerosis (considered so because the hardening of the arteries affects the mind.", "title": "Worldwide" }, { "paragraph_id": 36, "text": "The defence of mental disorder is codified in section 16 of the Criminal Code which states, in part:", "title": "Worldwide" }, { "paragraph_id": 37, "text": "To establish a claim of mental disorder the party raising the issue must show on a balance of probabilities first that the person who committed the act was suffering from a \"disease of the mind\", and second, that at the time of the offence they were either 1) unable to appreciate the \"nature and quality\" of the act, or 2) did not know it was \"wrong\".", "title": "Worldwide" }, { "paragraph_id": 38, "text": "The meaning of the word \"wrong\" was determined in the Supreme Court case of R. v. Chaulk [1990] 3 S.C.R. which held that \"wrong\" was NOT restricted to \"legally wrong\" but to \"morally wrong\" as well.", "title": "Worldwide" }, { "paragraph_id": 39, "text": "The current legislative scheme was created by the Parliament of Canada after the previous scheme was found unconstitutional by the Supreme Court of Canada in R. v. Swain. The new provisions also replaced the old insanity defense with the current mental disorder defence.", "title": "Worldwide" }, { "paragraph_id": 40, "text": "Once a person is found not criminally responsible (\"NCR\"), they will have a hearing by a Review Board within 45 days (90 days if the court extends the delay). A Review Board is established under Part XX.1 of the Criminal Code and is composed of at least three members, a person who is a judge or eligible to be a judge, a psychiatrist and another expert in a relevant field, such as social work, criminology or psychology. Parties at a Review Board hearing are usually the accused, the Crown and the hospital responsible for the supervision or assessment of the accused. 
A Review Board is responsible for both accused persons found NCR or accused persons found unfit to stand trial on account of mental disorder. A Review Board dealing with an NCR offender must consider two questions: whether the accused is a \"significant threat to the safety of the public\" and, if so, what the \"least onerous and least restrictive\" restrictions on the liberty of the accused should be in order to mitigate such a threat. Proceedings before a Review Board are inquisitorial rather than adversarial. Often the Review Board will be active in conducting an inquiry. Where the Review Board is unable to conclude that the accused is a significant threat to the safety of the public, the review board must grant the accused an absolute discharge, an order essentially terminating the jurisdiction of the criminal law over the accused. Otherwise, the Review Board must order that the accused be either discharged subject to conditions or detained in a hospital, both subject to conditions. The conditions imposed must be the least onerous and least restrictive necessary to mitigate any danger the accused may pose to others.", "title": "Worldwide" }, { "paragraph_id": 41, "text": "Since the Review Board is empowered under criminal law powers under s. 91(27) of the Constitution Act, 1867 the sole justification for its jurisdiction is public safety. Therefore, the nature of the inquiry is the danger the accused may pose to public safety rather than whether the accused is \"cured\". For instance, many \"sick\" accused persons are discharged absolutely on the basis that they are not a danger to the public while many \"sane\" accused are detained on the basis that they are dangerous. Moreover, the notion of \"significant threat to the safety of the public\" is a \"criminal threat\". 
This means that the Review Board must find that the threat posed by the accused is of a criminal nature.", "title": "Worldwide" }, { "paragraph_id": 42, "text": "While proceedings before a Review Board are less formal than in court, there are many procedural safeguards available to the accused given the potential indefinite nature of Part XX.1. Any party may appeal against the decision of a Review Board.", "title": "Worldwide" }, { "paragraph_id": 43, "text": "In 1992 when the new mental disorder provisions were enacted, Parliament included \"capping\" provisions which were to be enacted at a later date. These capping provisions limited the jurisdiction of a Review Board over an accused based on the maximum potential sentence had the accused been convicted (e.g. there would be a cap of 5 years if the maximum penalty for the index offence is 5 years). However, these provisions were never proclaimed into force and were subsequently repealed.", "title": "Worldwide" }, { "paragraph_id": 44, "text": "A Review Board must hold a hearing every 12 months (unless extended to 24 months) until the accused is discharged absolutely.", "title": "Worldwide" }, { "paragraph_id": 45, "text": "The issue of mental disorder may also come into play before a trial even begins if the accused's mental state prevents the accused from being able to appreciate the nature of a trial and to conduct a defence.", "title": "Worldwide" }, { "paragraph_id": 46, "text": "An accused who is found to be unfit to stand trial is subject to the jurisdiction a Review Board. While the considerations are essentially the same, there are a few provisions which apply only to unfit accused. A Review Board must determine whether the accused is fit to stand trial. 
Regardless of the determination, the Review Board must then determine what conditions should be imposed on the accused, considering both the protection of the public and the maintenance of the fitness of the accused (or conditions which would render the accused fit). Previously an absolute discharge was unavailable to an unfit accused. However, in R. v. Demers, the Supreme Court of Canada struck down the provision restricting the availability of an absolute discharge to an accused person who is deemed both \"permanently unfit\" and not a significant threat to the safety of the public. Presently a Review Board may recommend a judicial stay of proceedings in the event that it finds the accused both \"permanently unfit\" and non-dangerous. The decision is left to the court having jurisdiction over the accused.", "title": "Worldwide" }, { "paragraph_id": 47, "text": "An additional requirement for an unfit accused is the holding of a \"prima facie case\" hearing every two years. The Crown must demonstrate to the court having jurisdiction over the accused that it still has sufficient evidence to try the accused. If the Crown fails to meet this burden then the accused is discharged and proceedings are terminated. The nature of the hearing is virtually identical to that of a preliminary hearing.", "title": "Worldwide" }, { "paragraph_id": 48, "text": "In Denmark a psychotic person who commits a criminal defense is declared guilty but is sentenced to mandatory treatment instead of prison. Section 16 of the penal code states that \"Persons, who, at the time of the act, were irresponsible owing to mental illness or similar conditions or to a pronounced mental deficiency, are not punishable\". 
This means that in Denmark, 'insanity' is a legal term rather than a medical term and that the court retains the authority to decide whether an accused person is irresponsible.", "title": "Worldwide" }, { "paragraph_id": 49, "text": "In Finland, punishments can only be administered if the accused is compos mentis, of sound mind; not if the accused is insane (syyntakeeton, literally \"unable to guarantee [shoulder the responsibility of] guilt\"). Thus, an insane defendant may be found guilty based on the facts and their actions just as a sane defendant, but the insanity will only affect the punishment. The definition of insanity is similar to the M'Naghten criterion above: \"the accused is insane, if during the act, due to a mental illness, profound mental retardation or a severe disruption of mental health or consciousness, he cannot understand the actual nature of his act or its illegality, or that his ability to control his behavior is critically weakened\". If an accused is suspected to be insane, the court must consult the National Institute for Health and Welfare (THL), which is obliged to place the accused in involuntary commitment if they are found insane. The offender receives no judicial punishment; they become a patient under the jurisdiction of THL, and must be released immediately once the conditions of involuntary commitment are no longer fulfilled. Diminished responsibility is also available, resulting in lighter sentences.", "title": "Worldwide" }, { "paragraph_id": 50, "text": "According to section 20 of the German criminal code, those who commit an illegal act because a mental disorder makes them unable to see the wrong of the act or to act on this insight are considered not guilty. 
Section 63 stipulates that if the offender is deemed at risk of committing further offences that will harm others or cause grave economic damage, and if they therefore pose a continuing threat to public safety, they shall be committed to a psychiatric hospital in lieu of a custodial or suspended prison sentence.", "title": "Worldwide" }, { "paragraph_id": 51, "text": "If the ability to recognize the right or wrong of an action, or the ability to act accordingly, is lost due to a mental disorder, then the defendant cannot be pursued under Japanese criminal law, so if this is recognized during a trial, a verdict of not guilty will be given. This is, however, rare, happening in only around 1 in 500,000 cases.", "title": "Worldwide" }, { "paragraph_id": 52, "text": "Section 39 of the Dutch criminal code stipulates: \"Not culpable is he who performs an act that he cannot be imputed with due to the deficient development or pathological disorder of his mental faculties\". The definitions of \"deficient development\" and \"pathological [mental] disorder\" are obviously critical. These are to be verified by somatomedical and/or psychiatric specialists. An inculpability defense needs to conform to the following criteria:", "title": "Worldwide" }, { "paragraph_id": 53, "text": "If the inculpability defense succeeds, the defendant cannot be ordered to incarceration proper. If the defendant is deemed to be criminally insane (i.e. deemed to pose a risk to himself or others), the court instead may order involuntary admission to a mental institution for further evaluation and/or treatment. 
The court can opt for a definite period of time (when complete or at least sufficient recovery of mental faculties on a relatively short time scale is probable) or an indefinite period of time (when the defendant's ailment is deemed to be difficult or impossible to treat, or can be supposed to be refractory to treatment).", "title": "Worldwide" }, { "paragraph_id": 54, "text": "If the inculpability defense succeeds only partly (i.e., if the crime cannot be completely excused because of a minor degree of deficient development or pathological (mental) disorder), there may still be a legal basis for a diminished culpability of the defendant; in such case, a diminished prison sentence should be ordered. This can also be combined with the aforementioned involuntary admission to a mental institution, although in these cases the two 'sentences' are often served in parallel.", "title": "Worldwide" }, { "paragraph_id": 55, "text": "In Norway, psychotic perpetrators are declared guilty but not punished and, instead of prison, they are sentenced to mandatory treatment. Section 44 of the penal code states specifically that \"a person who at the time of the crime was insane or unconscious is not punished\". It is the responsibility of a criminal court to consider whether the accused may have been psychotic or suffering from other severe mental defects when perpetrating a criminal act. Thus, even though he declared himself to be sane, the court hearing the case of Anders Behring Breivik considered the question of his sanity.", "title": "Worldwide" }, { "paragraph_id": 56, "text": "Insanity is determined through a judicial decision issued on the basis of expert opinions of psychiatrists and psychologists.", "title": "Worldwide" }, { "paragraph_id": 57, "text": "A forensic psychiatric examination is used to establish insanity. 
The result of the forensic examination is then subjected to a legal assessment, taking into account other circumstances of the case, from which a conclusion is drawn about the defendant's sanity or insanity. The Criminal Code of Russia establishes that a person who during the commission of an illegal act was in a state of insanity, that is, could not be aware of the actual nature and social danger of their actions or was unable to control them due to a chronic mental disorder, a temporary mental disorder, or dementia is not subject to criminal liability.", "title": "Worldwide" }, { "paragraph_id": 58, "text": "In Sweden, psychotic perpetrators are seen as accountable, but the sanction is, if they are psychotic at the time of the trial, forensic mental care.", "title": "Worldwide" }, { "paragraph_id": 59, "text": "Although use of the insanity defense is rare, since the Criminal Procedure (Insanity and Unfitness to Plead) Act 1991, insanity pleas have steadily increased in the UK.", "title": "Worldwide" }, { "paragraph_id": 60, "text": "The Scottish Law Commission, in its Discussion Paper No 122 on Insanity and Diminished Responsibility (2003) confirms that the law has not substantially changed from the position stated in Hume's Commentaries in 1797:", "title": "Worldwide" }, { "paragraph_id": 61, "text": "We may next attend to the case of those unfortunate persons, who have plead the miserable defense of idiocy or insanity. Which condition, if it is not an assumed or imperfect, but a genuine and thorough insanity, and is proved by the testimony of intelligent witnesses, makes the act like that of an infant, and equally bestows the privilege of an entire exemption from any manner of pain; Cum alterum innocentia concilii tuetur, alterum fati infelicitas excusat. 
I say, where the insanity is absolute, and is duly proved: For if reason and humanity enforce the plea in these circumstances, it is no less necessary to observe a caution and reserve in applying the law, as shall hinder it from being understood, that there is any privilege in a case of mere weakness of intellect, or a strange and moody humor, or a crazy and capricious or irritable temper. In none of these situations does or can the law excuse the offender. Because such constitutions are not exclusive of a competent understanding of the true state of the circumstances in which the deed is done, nor of the subsistence of some steady and evil passion, grounded in those circumstances, and directed to a certain object. To serve the purpose of a defense in law, the disorder must therefore amount to an absolute alienation of reason, ut continua mentis alienatione, omni intellectu careat – such a disease as deprives the patient of the knowledge of the true aspect and position of things about them – hinders them from distinguishing friend from foe – and gives them up to the impulse of their own distempered fancy.", "title": "Worldwide" }, { "paragraph_id": 62, "text": "The phrase \"absolute alienation of reason\" is still regarded as at the core of the defense in the modern law (see HM Advocate v Kidd (1960) JC 61 and Brennan v HM Advocate (1977)).", "title": "Worldwide" }, { "paragraph_id": 63, "text": "In the United States, variances in the insanity defense between states, and in the federal court system, are attributable to differences with respect to three key issues:", "title": "Worldwide" }, { "paragraph_id": 64, "text": "In Foucha v. 
Louisiana (1992) the Supreme Court of the United States ruled that a person could not be held \"indefinitely\" for psychiatric treatment following a finding of not guilty by reason of insanity.", "title": "Worldwide" }, { "paragraph_id": 65, "text": "In the United States, a criminal defendant may plead insanity in federal court, and in the state courts of every state except for Idaho, Kansas, Montana, and Utah. However, defendants in states that disallow the insanity defense may still be able to demonstrate that a defendant was not capable of forming intent to commit a crime as a result of mental illness.", "title": "Worldwide" }, { "paragraph_id": 66, "text": "In Kahler v. Kansas (2020), the U.S. Supreme Court held, in a 6–3 ruling, that a state does not violate the Due Process Clause by abolishing an insanity defense based on a defendant's incapacity to distinguish right from wrong. The Court emphasized that state governments have broad discretion to choose laws defining \"the precise relationship between criminal culpability and mental illness.\"", "title": "Worldwide" }, { "paragraph_id": 67, "text": "Each state and the federal court system currently uses one of the following \"tests\" to define insanity for purposes of the insanity defense. Over its decades of use the definition of insanity has been modified by statute, with changes to the availability of the insanity defense, what constitutes legal insanity, whether the prosecutor or defendant has the burden of proof, the standard of proof required at trial, trial procedures, and to commitment and release procedures for defendants who have been acquitted based on a finding of insanity.", "title": "Worldwide" }, { "paragraph_id": 68, "text": "The M'Naghten Rules, which state, among other things, the guidelines for evaluating the criminal responsibility of defendants claiming to be insane, were settled in the British courts in the case of Daniel M'Naghten in 1843. 
M'Naghten was a Scottish woodturner who killed the secretary to the prime minister, Edward Drummond, in a botched attempt to assassinate the prime minister himself. M'Naghten apparently believed that the prime minister was the architect of the myriad of personal and financial misfortunes that had befallen him. During his trial, nine witnesses testified to the fact that he was insane, and the jury acquitted him, finding him \"not guilty by reason of insanity\".", "title": "Worldwide" }, { "paragraph_id": 69, "text": "The House of Lords asked the judges of the common law courts to answer five questions on insanity as a criminal defence, and the formulation that emerged from their review—that a defendant should not be held responsible for their actions only if, as a result of their mental disease or defect, they (i) did not know that their act would be wrong; or (ii) did not understand the nature and quality of their actions—became the basis of the law governing legal responsibility in cases of insanity in England. Under the rules, loss of control because of mental illness was no defense. The M'Naghten rule was embraced with almost no modification by American courts and legislatures for more than 100 years, until the mid-20th century.", "title": "Worldwide" }, { "paragraph_id": 70, "text": "The strict M'Naghten standard for the insanity defense was widely used until the 1950s and the case of Durham v. United States. In the Durham case, the court ruled that a defendant is entitled to acquittal if the crime was the product of their mental illness (i.e., crime would not have been committed but for the disease). The test, also called the Product Test, is broader than either the M'Naghten test or the irresistible impulse test. The test has more lenient guidelines for the insanity defense, but it addressed the issue of convicting mentally ill defendants, which was allowed under the M'Naghten Rule. 
However, the Durham standard drew much criticism because of its expansive definition of legal insanity.", "title": "Worldwide" }, { "paragraph_id": 71, "text": "The Model Penal Code, published by the American Law Institute, provides a standard for legal insanity that serves as a compromise between the strict M'Naghten Rule, the lenient Durham ruling, and the irresistible impulse test. Under the MPC standard, which represents the modern trend, a defendant is not responsible for criminal conduct \"if at the time of such conduct as a result of mental disease or defect he lacks substantial capacity either to appreciate the criminality of his conduct or to conform his conduct to the requirements of the law.\" The test thus takes into account both the cognitive and volitional capacity of insanity.", "title": "Worldwide" }, { "paragraph_id": 72, "text": "After the perpetrator of the assassination attempt on President Reagan was found not guilty by reason of insanity, Congress passed the Insanity Defense Reform Act of 1984. Under this act, the burden of proof was shifted from the prosecution to the defense and the standard of evidence in federal trials was increased from a preponderance of evidence to clear and convincing evidence. The ALI test was discarded in favor of a new test that more closely resembled M'Naghten's. Under this new test only perpetrators suffering from severe mental illnesses at the time of the crime could successfully employ the insanity defense. 
The defendant's ability to control himself or herself was no longer a consideration.", "title": "Worldwide" }, { "paragraph_id": 73, "text": "The Act also curbed the scope of expert psychiatric testimony and adopted stricter procedures regarding the hospitalization and release of those found not guilty by reason of insanity.", "title": "Worldwide" }, { "paragraph_id": 74, "text": "Those acquitted of a federal offense by reason of insanity have not been able to challenge their psychiatric confinement through a writ of habeas corpus or other remedies. In Archuleta v. Hedrick, 365 F.3d 644 (8th Cir. 2004), the U.S. Court of Appeals for the Eighth Circuit ruled that persons found not guilty by reason of insanity who later want to challenge their confinement may not attack their initial successful insanity defense:", "title": "Worldwide" }, { "paragraph_id": 75, "text": "The appellate court affirmed the lower court's judgment: \"Having thus elected to make himself a member of that 'exceptional class' of persons who seek verdicts of not guilty by reason of insanity...he cannot now be heard to complain of the statutory consequences of his election.\" The court held that no direct attack upon the final judgment of acquittal by reason of insanity was possible. It also held that the collateral attack that he was not informed that a possible alternative to his commitment was to ask for a new trial was not a meaningful alternative.", "title": "Worldwide" }, { "paragraph_id": 76, "text": "As an alternative to the insanity defense, some jurisdictions permit a defendant to plead guilty but mentally ill. 
A defendant who is found guilty but mentally ill may be sentenced to mental health treatment, at the conclusion of which the defendant will serve the remainder of their sentence in the same manner as any other defendant.", "title": "Worldwide" }, { "paragraph_id": 77, "text": "In a majority of states, the burden of proving insanity is placed on the defendant, who must prove insanity by a preponderance of the evidence.", "title": "Worldwide" }, { "paragraph_id": 78, "text": "In a minority of states, the burden is placed on the prosecution, who must prove sanity beyond a reasonable doubt.", "title": "Worldwide" }, { "paragraph_id": 79, "text": "In federal court the burden is placed on the defendant, who must prove insanity by clear and convincing evidence. See 18 U.S.C.S. Sec. 17(b); see also A.R.S. Sec. 13-502(C).", "title": "Worldwide" }, { "paragraph_id": 80, "text": "The insanity plea is used in the U.S. criminal justice system in less than 1% of all criminal cases. Little is known about the criminal justice system and the mentally ill:", "title": "Worldwide" }, { "paragraph_id": 81, "text": "[T]here is no definitive study regarding the percentage of people with mental illness who come into contact with police, appear as criminal defendants, are incarcerated, or are under community supervision. Furthermore, the scope of this issue varies across jurisdictions. Accordingly, advocates should rely as much as possible on statistics collected by local and state government agencies.", "title": "Worldwide" }, { "paragraph_id": 82, "text": "Some U.S. states have begun to ban the use of the insanity defense, and in 1994 the Supreme Court denied a petition of certiorari seeking review of a Montana Supreme Court case that upheld Montana's abolition of the defense. Idaho, Kansas, and Utah have also banned the defense. However, a mentally ill defendant/patient can be found unfit to stand trial in these states. 
In 2001, the Nevada Supreme Court found that their state's abolition of the defense was unconstitutional as a violation of Federal due process. In 2006, the Supreme Court decided Clark v. Arizona upholding Arizona's limitations on the insanity defense. In that same ruling, the Court noted \"We have never held that the Constitution mandates an insanity defense, nor have we held that the Constitution does not so require.\" In 2020, the Supreme Court decided Kahler v. Kansas upholding Kansas' abolition of the insanity defense, stating that the Constitution does not require Kansas to adopt an insanity test that turns on a defendant's ability to recognize that their crime was morally wrong.", "title": "Worldwide" }, { "paragraph_id": 83, "text": "The insanity defense is also complicated because of the underlying differences in philosophy between psychiatrists/psychologists and legal professionals. In the United States, a psychiatrist, psychologist or other mental health professional is often consulted as an expert witness in insanity cases, but the ultimate legal judgment of the defendant's sanity is determined by a jury, not by a mental health professional. In other words, mental health professionals provide testimony and professional opinion but are not ultimately responsible for answering legal questions.", "title": "Worldwide" } ]
The insanity defense, also known as the mental disorder defense, is an affirmative defense by excuse in a criminal case, arguing that the defendant is not responsible for their actions due to a psychiatric disease at the time of the criminal act. This is contrasted with an excuse of provocation, in which the defendant is responsible, but the responsibility is lessened due to a temporary mental state. It is also contrasted with the justification of self defense or with the mitigation of imperfect self-defense. The insanity defense is also contrasted with a finding that a defendant cannot stand trial in a criminal case because a mental disease prevents them from effectively assisting counsel, from a civil finding in trusts and estates where a will is nullified because it was made when a mental disorder prevented a testator from recognizing the natural objects of their bounty, and from involuntary civil commitment to a mental institution, when anyone is found to be gravely disabled or to be a danger to themself or to others. Legal definitions of insanity or mental disorder are varied, and include the M'Naghten Rule, the Durham rule, the 1953 British Royal Commission on Capital Punishment report, the ALI rule, and other provisions, often relating to a lack of mens rea. In the criminal laws of Australia and Canada, statutory legislation enshrines the M'Naghten Rules, with the terms defense of mental disorder, defense of mental illness or not criminally responsible by reason of mental disorder employed. Being incapable of distinguishing right from wrong is one basis for being found to be legally insane as a criminal defense. It originated in the M'Naghten Rule, and has been reinterpreted and modernized through more recent cases, such as People v. Serravo. In the United Kingdom, Ireland, and the United States, use of the defense is rare. 
Mitigating factors, including things not eligible for the insanity defense such as intoxication and partial defenses such as diminished capacity and provocation, are used more frequently. The defense is based on evaluations by forensic mental health professionals with the appropriate test according to the jurisdiction. Their testimony guides the jury, but they are not allowed to testify to the accused's criminal responsibility, as this is a matter for the jury to decide. Similarly, mental health practitioners are restrained from making a judgment on the "ultimate issue"—whether the defendant is insane. Some jurisdictions require the evaluation to address the defendant's ability to control their behavior at the time of the offense. A defendant claiming the defense is pleading "not guilty by reason of insanity" (NGRI) or "guilty but insane or mentally ill" in some jurisdictions which, if successful, may result in the defendant being committed to a psychiatric facility for an indeterminate period.
2001-12-12T13:49:38Z
2023-12-31T01:04:34Z
https://en.wikipedia.org/wiki/Insanity_defense
15,361
Ice age
An ice age is a long period of reduction in the temperature of Earth's surface and atmosphere, resulting in the presence or expansion of continental and polar ice sheets and alpine glaciers. Earth's climate alternates between ice ages and greenhouse periods, during which there are no glaciers on the planet. Earth is currently in the ice age called Quaternary glaciation. Individual pulses of cold climate within an ice age are termed glacial periods (or, alternatively, glacials, glaciations, glacial stages, stadials, stades, or colloquially, ice ages), and intermittent warm periods within an ice age are called interglacials or interstadials. In glaciology, ice age implies the presence of extensive ice sheets in the northern and southern hemispheres. By this definition, Earth is in an interglacial period—the Holocene. The amount of anthropogenic greenhouse gases emitted into Earth's oceans and atmosphere is projected to delay the next glacial period, which otherwise would begin in around 50,000 years, by between 100,000 and 500,000 years. In 1742, Pierre Martel (1706–1767), an engineer and geographer living in Geneva, visited the valley of Chamonix in the Alps of Savoy. Two years later he published an account of his journey. He reported that the inhabitants of that valley attributed the dispersal of erratic boulders to the glaciers, saying that they had once extended much farther. Later similar explanations were reported from other regions of the Alps. In 1815 the carpenter and chamois hunter Jean-Pierre Perraudin (1767–1858) explained erratic boulders in the Val de Bagnes in the Swiss canton of Valais as being due to glaciers previously extending further. An unknown woodcutter from Meiringen in the Bernese Oberland advocated a similar idea in a discussion with the Swiss-German geologist Jean de Charpentier (1786–1855) in 1834. 
Comparable explanations are also known from the Val de Ferret in the Valais and the Seeland in western Switzerland and in Goethe's scientific work. Such explanations could also be found in other parts of the world. When the Bavarian naturalist Ernst von Bibra (1806–1878) visited the Chilean Andes in 1849–1850, the natives attributed fossil moraines to the former action of glaciers. Meanwhile, European scholars had begun to wonder what had caused the dispersal of erratic material. From the middle of the 18th century, some discussed ice as a means of transport. The Swedish mining expert Daniel Tilas (1712–1772) was, in 1742, the first person to suggest drifting sea ice was a cause of the presence of erratic boulders in the Scandinavian and Baltic regions. In 1795, the Scottish philosopher and gentleman naturalist, James Hutton (1726–1797), explained erratic boulders in the Alps by the action of glaciers. Two decades later, in 1818, the Swedish botanist Göran Wahlenberg (1780–1851) published his theory of a glaciation of the Scandinavian peninsula. He regarded glaciation as a regional phenomenon. Only a few years later, the Danish-Norwegian geologist Jens Esmark (1762–1839) argued for a sequence of worldwide ice ages. In a paper published in 1824, Esmark proposed changes in climate as the cause of those glaciations. He attempted to show that they originated from changes in Earth's orbit. Esmark discovered the similarity between moraines near Haukalivatnet lake near sea level in Rogaland and moraines at branches of Jostedalsbreen. Esmark's discovery was later attributed to or appropriated by Theodor Kjerulf and Louis Agassiz. During the following years, Esmark's ideas were discussed and taken over in part by Swedish, Scottish and German scientists. At the University of Edinburgh Robert Jameson (1774–1854) seemed to be relatively open to Esmark's ideas, as reviewed by Norwegian professor of glaciology Bjørn G. Andersen (1992). 
Jameson's remarks about ancient glaciers in Scotland were most probably prompted by Esmark. In Germany, Albrecht Reinhard Bernhardi (1797–1849), a geologist and professor of forestry at an academy in Dreissigacker (since incorporated in the southern Thuringian city of Meiningen), adopted Esmark's theory. In a paper published in 1832, Bernhardi speculated about the polar ice caps once reaching as far as the temperate zones of the globe. In Val de Bagnes, a valley in the Swiss Alps, there was a long-held local belief that the valley had once been covered deep in ice, and in 1815 a local chamois hunter called Jean-Pierre Perraudin attempted to convert the geologist Jean de Charpentier to the idea, pointing to deep striations in the rocks and giant erratic boulders as evidence. Charpentier held the general view that these signs were caused by vast floods, and he rejected Perraudin's theory as absurd. In 1818 the engineer Ignatz Venetz joined Perraudin and Charpentier to examine a proglacial lake above the valley created by an ice dam as a result of the 1815 eruption of Mount Tambora, which threatened to cause a catastrophic flood when the dam broke. Perraudin attempted unsuccessfully to convert his companions to his theory, but when the dam finally broke, there were only minor erratics and no striations, and Venetz concluded that Perraudin was right and that only ice could have caused such major results. In 1821 he read a prize-winning paper on the theory to the Swiss Society, but it was not published until Charpentier, who had also become converted, published it with his own more widely read paper in 1834. In the meantime, the German botanist Karl Friedrich Schimper (1803–1867) was studying mosses which were growing on erratic boulders in the alpine upland of Bavaria. He began to wonder where such masses of stone had come from. During the summer of 1835 he made some excursions to the Bavarian Alps. 
Schimper came to the conclusion that ice must have been the means of transport for the boulders in the alpine upland. In the winter of 1835–36 he held some lectures in Munich. Schimper then assumed that there must have been global times of obliteration ("Verödungszeiten") with a cold climate and frozen water. Schimper spent the summer months of 1836 at Devens, near Bex, in the Swiss Alps with his former university friend Louis Agassiz (1801–1873) and Jean de Charpentier. Schimper, Charpentier and possibly Venetz convinced Agassiz that there had been a time of glaciation. During the winter of 1836–37, Agassiz and Schimper developed the theory of a sequence of glaciations. They mainly drew upon the preceding works of Venetz, Charpentier and on their own fieldwork. Agassiz appears to have been already familiar with Bernhardi's paper at that time. At the beginning of 1837, Schimper coined the term "ice age" ("Eiszeit") for the period of the glaciers. In July 1837 Agassiz presented their synthesis before the annual meeting of the Swiss Society for Natural Research at Neuchâtel. The audience was very critical, and some were opposed to the new theory because it contradicted the established opinions on climatic history. Most contemporary scientists thought that Earth had been gradually cooling down since its birth as a molten globe. In order to persuade the skeptics, Agassiz embarked on geological fieldwork. He published his book Study on Glaciers ("Études sur les glaciers") in 1840. Charpentier was put out by this, as he had also been preparing a book about the glaciation of the Alps. Charpentier felt that Agassiz should have given him precedence as it was he who had introduced Agassiz to in-depth glacial research. As a result of personal quarrels, Agassiz had also omitted any mention of Schimper in his book. It took several decades before the ice age theory was fully accepted by scientists. 
This happened on an international scale in the second half of the 1870s, following the work of James Croll, including the publication of Climate and Time, in Their Geological Relations in 1875, which provided a credible explanation for the causes of ice ages. There are three main types of evidence for ice ages: geological, chemical, and paleontological. Geological evidence for ice ages comes in various forms, including rock scouring and scratching, glacial moraines, drumlins, valley cutting, and the deposition of till or tillites and glacial erratics. Successive glaciations tend to distort and erase the geological evidence for earlier glaciations, making it difficult to interpret. Furthermore, this evidence was difficult to date exactly; early theories assumed that the glacials were short compared to the long interglacials. The advent of sediment and ice cores revealed the true situation: glacials are long, interglacials short. It took some time for the current theory to be worked out. The chemical evidence mainly consists of variations in the ratios of isotopes in fossils present in sediments and sedimentary rocks and ocean sediment cores. For the most recent glacial periods, ice cores provide climate proxies, both from the ice itself and from atmospheric samples provided by included bubbles of air. Because water containing heavier isotopes has a higher heat of evaporation, its proportion decreases with colder conditions. This allows a temperature record to be constructed. This evidence can be confounded, however, by other factors recorded by isotope ratios. The paleontological evidence consists of changes in the geographical distribution of fossils. During a glacial period, cold-adapted organisms spread into lower latitudes, and organisms that prefer warmer conditions become extinct or retreat into lower latitudes. 
This evidence is also difficult to interpret. Despite the difficulties, analysis of ice cores and ocean sediment cores has provided a credible record of glacials and interglacials over the past few million years. These also confirm the linkage between ice ages and continental crust phenomena such as glacial moraines, drumlins, and glacial erratics. Hence the continental crust phenomena are accepted as good evidence of earlier ice ages when they are found in layers created much earlier than the time range for which ice cores and ocean sediment cores are available. There have been at least five major ice ages in Earth's history (the Huronian, Cryogenian, Andean-Saharan, late Paleozoic, and the latest Quaternary Ice Age). Outside these ages, Earth was previously thought to have been ice-free even in high latitudes; such periods are known as greenhouse periods. However, other studies dispute this, finding evidence of occasional glaciations at high latitudes even during apparent greenhouse periods. Rocks from the earliest well-established ice age, called the Huronian, have been dated to around 2.4 to 2.1 billion years ago during the early Proterozoic Eon. Several hundreds of kilometers of the Huronian Supergroup are exposed 10 to 100 kilometers (6 to 62 mi) north of the north shore of Lake Huron, extending from near Sault Ste. Marie to Sudbury, northeast of Lake Huron, with giant layers of now-lithified till beds, dropstones, varves, outwash, and scoured basement rocks. Correlative Huronian deposits have been found near Marquette, Michigan, and correlation has been made with Paleoproterozoic glacial deposits from Western Australia. The Huronian ice age was caused by the elimination of atmospheric methane, a greenhouse gas, during the Great Oxygenation Event. 
The next well-documented ice age, and probably the most severe of the last billion years, occurred from 720 to 630 million years ago (the Cryogenian period) and may have produced a Snowball Earth in which glacial ice sheets reached the equator, possibly being ended by the accumulation of greenhouse gases such as CO2 produced by volcanoes. "The presence of ice on the continents and pack ice on the oceans would inhibit both silicate weathering and photosynthesis, which are the two major sinks for CO2 at present." It has been suggested that the end of this ice age was responsible for the subsequent Ediacaran and Cambrian explosion, though this model is recent and controversial. The Andean-Saharan glaciation occurred from 460 to 420 million years ago, during the Late Ordovician and the Silurian period. The evolution of land plants at the onset of the Devonian period caused a long-term increase in planetary oxygen levels and reduction of CO2 levels, which resulted in the late Paleozoic icehouse. Its former name, the Karoo glaciation, came from the glacial tills found in the Karoo region of South Africa. There were extensive polar ice caps at intervals from 360 to 260 million years ago in South Africa during the Carboniferous and early Permian periods. Correlatives are known from Argentina, also in the center of the ancient supercontinent Gondwanaland. Although the Mesozoic Era retained a greenhouse climate over its timespan and was previously assumed to have been entirely glaciation-free, more recent studies suggest that brief periods of glaciation occurred in both hemispheres during the Early Cretaceous. Geologic and palaeoclimatological records suggest the existence of glacial periods during the Valanginian, Hauterivian, and Aptian stages of the Early Cretaceous. Ice-rafted glacial dropstones indicate that in the Northern Hemisphere, ice sheets may have extended as far south as the Iberian Peninsula during the Hauterivian and Aptian.
Although ice sheets largely disappeared from Earth for the rest of the period (potential reports from the Turonian, otherwise the warmest period of the Phanerozoic, are disputed), ice sheets and associated sea ice appear to have briefly returned to Antarctica near the very end of the Maastrichtian just prior to the Cretaceous-Paleogene extinction event. The Quaternary Glaciation, or Quaternary Ice Age, started about 2.58 million years ago at the beginning of the Quaternary Period when the spread of ice sheets in the Northern Hemisphere began. Since then, the world has seen cycles of glaciation with ice sheets advancing and retreating on 40,000- and 100,000-year time scales called glacial periods, glacials or glacial advances, and interglacial periods, interglacials or glacial retreats. Earth is currently in an interglacial, and the last glacial period ended about 11,700 years ago. All that remains of the continental ice sheets are the Greenland and Antarctic ice sheets and smaller glaciers such as those on Baffin Island. The definition of the Quaternary as beginning 2.58 Ma is based on the formation of the Arctic ice cap. The Antarctic ice sheet began to form earlier, at about 34 Ma, in the mid-Cenozoic (Eocene-Oligocene Boundary). The term Late Cenozoic Ice Age is used to include this early phase. Ice ages can be further divided by location and time; for example, the names Riss (180,000–130,000 years BP) and Würm (70,000–10,000 years BP) refer specifically to glaciation in the Alpine region. The maximum extent of the ice is not maintained for the full interval. The scouring action of each glaciation tends to remove most of the evidence of prior ice sheets almost completely, except in regions where the later sheet does not achieve full coverage. Within the current glaciation, more temperate and more severe periods have occurred. The colder periods are called glacial periods, the warmer periods interglacials, such as the Eemian Stage.
There is evidence that similar glacial cycles occurred in previous glaciations, including the Andean-Saharan and the late Paleozoic ice house. The glacial cycles of the late Paleozoic ice house are likely responsible for the deposition of cyclothems. Glacials are characterized by cooler and drier climates over most of Earth and large land and sea ice masses extending outward from the poles. Mountain glaciers in otherwise unglaciated areas extend to lower elevations due to a lower snow line. Sea levels drop due to the removal of large volumes of water above sea level in the icecaps. There is evidence that ocean circulation patterns are disrupted by glaciations. The glacials and interglacials coincide with changes in orbital forcing of climate due to Milankovitch cycles, which are periodic changes in Earth's orbit and the tilt of Earth's rotational axis. Earth has been in an interglacial period known as the Holocene for around 11,700 years, and an article in Nature in 2004 argues that it might be most analogous to a previous interglacial that lasted 28,000 years. Predicted changes in orbital forcing suggest that the next glacial period would begin at least 50,000 years from now. Moreover, anthropogenic forcing from increased greenhouse gases is estimated to potentially outweigh the orbital forcing of the Milankovitch cycles for hundreds of thousands of years. Each glacial period is subject to positive feedback which makes it more severe, and negative feedback which mitigates and (in all cases so far) eventually ends it. An important form of feedback is provided by Earth's albedo, which is how much of the sun's energy is reflected rather than absorbed by Earth. Ice and snow increase Earth's albedo, while forests reduce its albedo. When the air temperature decreases, ice and snow fields grow, and they reduce forest cover. This continues until competition with a negative feedback mechanism forces the system to an equilibrium. 
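The interplay described above, a positive ice-albedo feedback eventually halted by a negative feedback, can be illustrated with a toy zero-dimensional energy-balance model. Everything in the sketch is an assumption chosen for clarity (the albedo ramp, the effective emissivity standing in for the greenhouse effect, the solar constant divided by 4 for the spherical average); it is a sketch of the feedback logic, not a climate model.

```python
# Toy zero-dimensional energy-balance model of the ice-albedo feedback.
# All constants are illustrative assumptions chosen so the iteration
# converges to two distinct stable states (a warm and a cold branch).

SOLAR = 342.0      # globally averaged incoming solar flux, W/m^2 (assumed)
SIGMA = 5.67e-8    # Stefan-Boltzmann constant
EMISSIVITY = 0.61  # effective emissivity, a crude greenhouse proxy (assumed)

def albedo(temp_k):
    """Colder planet -> more ice -> higher albedo (the positive feedback)."""
    if temp_k <= 260.0:
        return 0.65                         # fully ice-covered (assumed)
    if temp_k >= 285.0:
        return 0.30                         # essentially ice-free (assumed)
    return 0.65 - 0.35 * (temp_k - 260.0) / 25.0  # linear ramp between states

def equilibrium_temperature(t0, steps=300):
    """Damped fixed-point iteration: absorbed sunlight balances infrared
    emission (the negative feedback that eventually halts ice growth)."""
    t = t0
    for _ in range(steps):
        absorbed = SOLAR * (1.0 - albedo(t))
        t_new = (absorbed / (EMISSIVITY * SIGMA)) ** 0.25
        t += 0.1 * (t_new - t)              # damped update for stability
    return t

warm = equilibrium_temperature(295.0)  # warm start -> ice-free equilibrium
cold = equilibrium_temperature(230.0)  # cold start -> ice-covered equilibrium
print(round(warm, 1), round(cold, 1))
```

Starting the same model warm or cold lands on two different equilibria, which is the feedback-driven bistability the paragraph describes: growth of ice amplifies cooling until the radiative (negative) feedback balances it.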
One theory is that when glaciers form, two things happen: the ice grinds rocks into dust, and the land becomes dry and arid. This allows winds to transport iron-rich dust into the open ocean, where it acts as a fertilizer that causes massive algal blooms that pull large amounts of CO2 out of the atmosphere. This in turn makes it even colder and causes the glaciers to grow more. In 1956, Ewing and Donn hypothesized that an ice-free Arctic Ocean leads to increased snowfall at high latitudes. When low-temperature ice covers the Arctic Ocean there is little evaporation or sublimation and the polar regions are quite dry in terms of precipitation, comparable to the amount found in mid-latitude deserts. This low precipitation allows high-latitude snowfalls to melt during the summer. An ice-free Arctic Ocean absorbs solar radiation during the long summer days, and evaporates more water into the Arctic atmosphere. With higher precipitation, portions of this snow may not melt during the summer and so glacial ice can form at lower altitudes and more southerly latitudes, reducing the temperatures over land by increased albedo as noted above. Furthermore, under this hypothesis the lack of oceanic pack ice allows increased exchange of waters between the Arctic and the North Atlantic Oceans, warming the Arctic and cooling the North Atlantic. (Current projected consequences of global warming include a brief ice-free Arctic Ocean period by 2050.) Additional fresh water flowing into the North Atlantic during a warming cycle may also reduce the global ocean water circulation. Such a reduction (by reducing the effects of the Gulf Stream) would have a cooling effect on northern Europe, which in turn would lead to increased low-latitude snow retention during the summer. It has also been suggested that during an extensive glacial, glaciers may move through the Gulf of Saint Lawrence, extending into the North Atlantic Ocean far enough to block the Gulf Stream.
Ice sheets that form during glaciations erode the land beneath them. This can reduce the land area above sea level and thus diminish the amount of space on which ice sheets can form. This mitigates the albedo feedback, as does the rise in sea level that accompanies the reduced area of ice sheets, since open ocean has a lower albedo than land. Another negative feedback mechanism is the increased aridity occurring with glacial maxima, which reduces the precipitation available to maintain glaciation. The glacial retreat induced by this or any other process can be amplified by positive feedbacks analogous to those that amplify glacial advances. According to research published in Nature Geoscience, human emissions of carbon dioxide (CO2) will defer the next glacial period. Researchers used data on Earth's orbit to find the historical warm interglacial period that looks most like the current one and from this have predicted that the next glacial period would usually begin within 1,500 years. They go on to predict that CO2 emissions have been so high that it will not. The causes of ice ages are not fully understood for either the large-scale ice age periods or the smaller ebb and flow of glacial–interglacial periods within an ice age.
The consensus is that several factors are important: atmospheric composition, such as the concentrations of carbon dioxide and methane (levels of these gases over the past 800,000 years can now be measured in ice cores from the European Project for Ice Coring in Antarctica (EPICA) Dome C); changes in Earth's orbit around the Sun known as Milankovitch cycles; the motion of tectonic plates resulting in changes in the relative location and amount of continental and oceanic crust on Earth's surface, which affect wind and ocean currents; variations in solar output; the orbital dynamics of the Earth–Moon system; the impact of relatively large meteorites and volcanism including eruptions of supervolcanoes. Some of these factors influence each other. For example, changes in Earth's atmospheric composition (especially the concentrations of greenhouse gases) may alter the climate, while climate change itself can change the atmospheric composition (for example by changing the rate at which weathering removes CO2). Maureen Raymo, William Ruddiman and others propose that the Tibetan and Colorado Plateaus are immense CO2 "scrubbers" with a capacity to remove enough CO2 from the global atmosphere to be a significant causal factor of the 40-million-year Cenozoic cooling trend. They further claim that approximately half of their uplift (and CO2 "scrubbing" capacity) occurred in the past 10 million years. There is evidence that greenhouse gas levels fell at the start of ice ages and rose during the retreat of the ice sheets, but it is difficult to establish cause and effect (see the notes above on the role of weathering). Greenhouse gas levels may also have been affected by other factors which have been proposed as causes of ice ages, such as the movement of continents and volcanism.
The Snowball Earth hypothesis maintains that the severe freezing in the late Proterozoic was ended by an increase in CO2 levels in the atmosphere, mainly from volcanoes, and some supporters of Snowball Earth argue that it was caused in the first place by a reduction in atmospheric CO2. The hypothesis also warns of future Snowball Earths. In 2009, further evidence was provided that changes in solar insolation provide the initial trigger for Earth to warm after an Ice Age, with secondary factors like increases in greenhouse gases accounting for the magnitude of the change. The geological record appears to show that ice ages start when the continents are in positions which block or reduce the flow of warm water from the equator to the poles and thus allow ice sheets to form. The ice sheets increase Earth's reflectivity and thus reduce the absorption of solar radiation. With less radiation absorbed the atmosphere cools; the cooling allows the ice sheets to grow, which further increases reflectivity in a positive feedback loop. The ice age continues until the reduction in weathering causes an increase in the greenhouse effect. There are three main contributors from the layout of the continents that obstruct the movement of warm water to the poles: Since today's Earth has a continent over the South Pole and an almost land-locked ocean over the North Pole, geologists believe that Earth will continue to experience glacial periods in the geologically near future. Some scientists believe that the Himalayas are a major factor in the current ice age, because these mountains have increased Earth's total rainfall and therefore the rate at which carbon dioxide is washed out of the atmosphere, decreasing the greenhouse effect. The Himalayas' formation started about 70 million years ago when the Indo-Australian Plate collided with the Eurasian Plate, and the Himalayas are still rising by about 5 mm per year because the Indo-Australian plate is still moving at 67 mm/year. 
The history of the Himalayas broadly fits the long-term decrease in Earth's average temperature since the mid-Eocene, 40 million years ago. Another important contribution to ancient climate regimes is the variation of ocean currents, which are modified by continent position, sea levels and salinity, as well as other factors. They have the ability to cool (e.g. aiding the creation of Antarctic ice) and the ability to warm (e.g. giving the British Isles a temperate as opposed to a boreal climate). The closing of the Isthmus of Panama about 3 million years ago may have ushered in the present period of strong glaciation over North America by ending the exchange of water between the tropical Atlantic and Pacific Oceans. Analyses suggest that ocean current fluctuations can adequately account for recent glacial oscillations. During the last glacial period, sea level fluctuated by 20–30 m as water was sequestered, primarily in the Northern Hemisphere ice sheets. When ice collected and the sea level dropped sufficiently, flow through the Bering Strait (the narrow strait between Siberia and Alaska is about 50 m deep today) was reduced, resulting in increased flow from the North Atlantic. This realigned the thermohaline circulation in the Atlantic, increasing heat transport into the Arctic, which melted the polar ice accumulation and reduced other continental ice sheets. The release of water raised sea levels again, restoring the ingress of colder water from the Pacific with an accompanying shift to northern hemisphere ice accumulation. According to a study published in Nature in 2021, all glacial periods of ice ages over the last 1.5 million years were associated with northward shifts of melting Antarctic icebergs which changed ocean circulation patterns, leading to more CO2 being pulled out of the atmosphere.
The authors suggest that this process may be disrupted in the future as the Southern Ocean will become too warm for the icebergs to travel far enough to trigger these changes. Matthias Kuhle's geological theory of Ice Age development was suggested by the existence of an ice sheet covering the Tibetan Plateau during the Ice Ages (Last Glacial Maximum?). According to Kuhle, the plate-tectonic uplift of Tibet past the snow-line has led to a surface of c. 2,400,000 square kilometres (930,000 sq mi) changing from bare land to ice with a 70% greater albedo. The reflection of energy into space resulted in a global cooling, triggering the Pleistocene Ice Age. Because this highland is at a subtropical latitude, with 4 to 5 times the insolation of high-latitude areas, what would be Earth's strongest heating surface has turned into a cooling surface. Kuhle explains the interglacial periods by the 100,000-year cycle of radiation changes due to variations in Earth's orbit. This comparatively insignificant warming, when combined with the lowering of the Nordic inland ice areas and Tibet due to the weight of the superimposed ice-load, has led to the repeated complete thawing of the inland ice areas. The Milankovitch cycles are a set of cyclic variations in characteristics of Earth's orbit around the Sun. Each cycle has a different length, so at some times their effects reinforce each other and at other times they (partially) cancel each other. There is strong evidence that the Milankovitch cycles affect the occurrence of glacial and interglacial periods within an ice age. The present ice age is the most studied and best understood, particularly the last 400,000 years, since this is the period covered by ice cores that record atmospheric composition and proxies for temperature and ice volume. Within this period, the match of glacial/interglacial frequencies to the Milanković orbital forcing periods is so close that orbital forcing is generally accepted. 
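A rough order-of-magnitude check of the Kuhle mechanism described above: how much extra sunlight is reflected when roughly 2,400,000 km² of subtropical plateau turns from bare land to ice. The bare-ground albedo and the mean local insolation used below are illustrative assumptions; the plateau area and the "70% greater albedo" figure come from the text.

```python
# Order-of-magnitude sketch of the albedo change from glaciating the
# Tibetan Plateau. Bare-ground albedo (0.25) and mean insolation scale
# (300 W/m^2) are assumptions; area and the 70% increase are from the text.

EARTH_SURFACE_KM2 = 5.1e8        # total surface area of Earth, km^2
PLATEAU_KM2 = 2.4e6              # glaciated plateau area (from the text)
INSOLATION = 300.0               # assumed mean local insolation, W/m^2
ALBEDO_BARE = 0.25               # assumed bare-ground albedo
ALBEDO_ICE = ALBEDO_BARE * 1.7   # "70% greater albedo" (from the text)

area_fraction = PLATEAU_KM2 / EARTH_SURFACE_KM2
extra_reflected = area_fraction * INSOLATION * (ALBEDO_ICE - ALBEDO_BARE)
print(round(area_fraction * 100, 2))  # plateau as a % of Earth's surface
print(round(extra_reflected, 2))      # globally averaged change, W/m^2
```

Even under these crude assumptions, the plateau covers well under one percent of Earth's surface, so the mechanism's claimed leverage rests on the unusually high subtropical insolation rather than on the area alone.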
The combined effects of the changing distance to the Sun, the precession of Earth's axis, and the changing tilt of Earth's axis redistribute the sunlight received by Earth. Of particular importance are changes in the tilt of Earth's axis, which affect the intensity of seasons. For example, the amount of solar influx in July at 65 degrees north latitude varies by as much as 22% (from 450 W/m² to 550 W/m²). It is widely believed that ice sheets advance when summers become too cool to melt all of the accumulated snowfall from the previous winter. Some believe that the strength of the orbital forcing is too small to trigger glaciations, but feedback mechanisms like CO2 may explain this mismatch. While Milankovitch forcing predicts that cyclic changes in Earth's orbital elements can be expressed in the glaciation record, additional explanations are necessary to explain which cycles are observed to be most important in the timing of glacial–interglacial periods. In particular, during the last 800,000 years, the dominant period of glacial–interglacial oscillation has been 100,000 years, which corresponds to changes in Earth's orbital eccentricity and orbital inclination. Yet this is by far the weakest of the three frequencies predicted by Milankovitch. During the period 3.0–0.8 million years ago, the dominant pattern of glaciation corresponded to the 41,000-year period of changes in Earth's obliquity (tilt of the axis). The reasons for dominance of one frequency versus another are poorly understood and an active area of current research, but the answer probably relates to some form of resonance in Earth's climate system. Recent work suggests that the 100,000-year cycle dominates due to increased southern-pole sea ice raising total solar reflectivity. The "traditional" Milankovitch explanation struggles to explain the dominance of the 100,000-year cycle over the last eight cycles. Richard A. Muller, Gordon J. F.
MacDonald, and others have pointed out that those calculations are for a two-dimensional orbit of Earth but the three-dimensional orbit also has a 100,000-year cycle of orbital inclination. They proposed that these variations in orbital inclination lead to variations in insolation, as Earth moves in and out of known dust bands in the solar system. Although this is a different mechanism to the traditional view, the "predicted" periods over the last 400,000 years are nearly the same. The Muller and MacDonald theory, in turn, has been challenged by Jose Antonio Rial. Another worker, William Ruddiman, has suggested a model that explains the 100,000-year cycle by the modulating effect of eccentricity (weak 100,000-year cycle) on precession (26,000-year cycle) combined with greenhouse gas feedbacks in the 41,000- and 26,000-year cycles. Yet another theory has been advanced by Peter Huybers who argued that the 41,000-year cycle has always been dominant, but that Earth has entered a mode of climate behavior where only the second or third cycle triggers an ice age. This would imply that the 100,000-year periodicity is really an illusion created by averaging together cycles lasting 80,000 and 120,000 years. This theory is consistent with a simple empirical multi-state model proposed by Didier Paillard. Paillard suggests that the late Pleistocene glacial cycles can be seen as jumps between three quasi-stable climate states. The jumps are induced by the orbital forcing, while in the early Pleistocene the 41,000-year glacial cycles resulted from jumps between only two climate states. A dynamical model explaining this behavior was proposed by Peter Ditlevsen. This is in support of the suggestion that the late Pleistocene glacial cycles are not due to the weak 100,000-year eccentricity cycle, but a non-linear response to mainly the 41,000-year obliquity cycle. 
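The Huybers "skipped-obliquity" idea described above can be checked with simple arithmetic: if ice ages are triggered only every second or third 41,000-year obliquity cycle, the intervals between glacial terminations are about 82,000 or 123,000 years, and their average lands near 100,000 years. The particular trigger sequence below is an illustrative assumption, not data.

```python
# Sketch of the Huybers skipped-obliquity arithmetic: terminations every
# 2nd or 3rd obliquity cycle average out near a 100,000-year periodicity.

OBLIQUITY_PERIOD = 41_000  # years

# Hypothetical sequence: obliquity cycles elapsing between successive
# glacial terminations (an assumed mix of 2s and 3s, not observed data).
skips = [2, 3, 2, 3, 3, 2, 3, 2]

intervals = [n * OBLIQUITY_PERIOD for n in skips]  # 82 kyr or 123 kyr each
mean_interval = sum(intervals) / len(intervals)
print(intervals)
print(mean_interval)
```

With this mix of skips the mean interval comes out at 102,500 years, which illustrates how a strict 41,000-year pacing can masquerade as the observed roughly 100,000-year periodicity.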
There are at least two types of variation in the Sun's energy output: a long-term increase over the Sun's lifetime, and shorter-term variations such as sunspot cycles. The long-term increase in the Sun's output cannot be a cause of ice ages. Volcanic eruptions may have contributed to the inception and/or the end of ice age periods. At times during the paleoclimate, carbon dioxide levels were two or three times greater than today. Volcanoes and movements in continental plates contributed to high amounts of CO2 in the atmosphere. Carbon dioxide from volcanoes probably contributed to periods with highest overall temperatures. One suggested explanation of the Paleocene–Eocene Thermal Maximum is that undersea volcanoes released methane from clathrates and thus caused a large and rapid increase in the greenhouse effect. There appears to be no geological evidence for such eruptions at the right time, but this does not prove they did not happen. The current geological period, the Quaternary, which began about 2.6 million years ago and extends into the present, is marked by warm and cold episodes, cold phases called glacials (Quaternary ice age) lasting about 100,000 years, interrupted by warmer interglacials lasting about 10,000–15,000 years. The last cold episode of the Last Glacial Period ended about 10,000 years ago. Earth is currently in an interglacial period of the Quaternary, called the Holocene. The major glacial stages of the current ice age in North America are the Illinoian, Eemian, and Wisconsin glaciation. The use of the Nebraskan, Afton, Kansan, and Yarmouthian stages to subdivide the ice age in North America has been discontinued by Quaternary geologists and geomorphologists. These stages were all merged into the Pre-Illinoian in the 1980s. During the most recent North American glaciation, during the latter part of the Last Glacial Maximum (26,000 to 13,300 years ago), ice sheets extended to about the 45th parallel north. These sheets were 3 to 4 kilometres (1.9 to 2.5 mi) thick.
This Wisconsin glaciation left widespread impacts on the North American landscape. The Great Lakes and the Finger Lakes were carved by ice deepening old valleys. Most of the lakes in Minnesota and Wisconsin were gouged out by glaciers and later filled with glacial meltwaters. The old Teays River drainage system was radically altered and largely reshaped into the Ohio River drainage system. Other rivers were dammed and diverted to new channels, such as the Niagara River, which formed a dramatic waterfall and gorge when its flow encountered a limestone escarpment. Another similar waterfall, at the present Clark Reservation State Park near Syracuse, New York, is now dry. The area from Long Island to Nantucket, Massachusetts was formed from glacial till, and the plethora of lakes on the Canadian Shield in northern Canada can be almost entirely attributed to the action of the ice. As the ice retreated and the rock dust dried, winds carried the material hundreds of miles, forming beds of loess many dozens of feet thick in the Missouri Valley. Post-glacial rebound continues to reshape the Great Lakes and other areas formerly under the weight of the ice sheets. The Driftless Area, a portion of western and southwestern Wisconsin along with parts of adjacent Minnesota, Iowa, and Illinois, was not covered by glaciers. An especially interesting climatic change during glacial times took place in the semi-arid Andes. Besides the expected cooling compared with the current climate, a significant precipitation change occurred here. Research in the presently semi-arid subtropical Aconcagua massif (6,962 m) has shown an unexpectedly extensive glaciation of the "ice stream network" type. The connected valley glaciers, exceeding 100 km in length, flowed down the east side of this section of the Andes at 32–34°S and 69–71°W to an elevation of 2,060 m, and on the western windward side still clearly lower.
Current glaciers there scarcely reach 10 km in length, and the snowline (equilibrium line altitude, ELA) runs at a height of 4,600 m; at that time it was lowered to 3,200 m asl, a depression of about 1,400 m. It follows that, besides an annual temperature depression of about 8.4 °C, there was an increase in precipitation. Accordingly, in glacial times the humid climatic belt that today lies several degrees of latitude further south was shifted much further north. Although the last glacial period ended more than 8,000 years ago, its effects can still be felt today. For example, the moving ice carved out the landscape in Canada (see Canadian Arctic Archipelago), Greenland, northern Eurasia and Antarctica. The erratic boulders, till, drumlins, eskers, fjords, kettle lakes, moraines, cirques, horns, etc., are typical features left behind by the glaciers. The weight of the ice sheets was so great that they deformed Earth's crust and mantle. After the ice sheets melted, the ice-covered land rebounded. Due to the high viscosity of Earth's mantle, the flow of mantle rocks which controls the rebound process is very slow, at a rate of about 1 cm/year near the center of the rebound area today. During glaciation, water was taken from the oceans to form the ice at high latitudes, thus global sea level dropped by about 110 meters, exposing the continental shelves and forming land-bridges between land-masses for animals to migrate. During deglaciation, the melted ice-water returned to the oceans, causing sea level to rise. This process can cause sudden shifts in coastlines and hydrological systems resulting in newly submerged lands, emerging lands, collapsed ice dams resulting in salination of lakes, new ice dams creating vast areas of freshwater, and a general alteration in regional weather patterns on a large but temporary scale. It can even cause temporary reglaciation.
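The snowline arithmetic in the Andean example above is a simple lapse-rate calculation: an equilibrium-line depression of about 1,400 m corresponds to roughly 8.4 °C of cooling if one assumes a mean atmospheric lapse rate of 6 °C per km. The lapse-rate value is a standard, but here assumed, figure; the two snowline elevations come from the text.

```python
# Lapse-rate check of the Andean snowline figures quoted above.

LAPSE_RATE = 6.0       # deg C per km, assumed mean atmospheric lapse rate
ela_modern_m = 4600    # present-day snowline elevation, m asl (from text)
ela_glacial_m = 3200   # glacial snowline elevation, m asl (from text)

depression_m = ela_modern_m - ela_glacial_m
cooling_c = LAPSE_RATE * depression_m / 1000.0
print(depression_m, cooling_c)  # 1400 m depression -> 8.4 deg C cooling
```

The agreement with the text's 8.4 °C figure shows that the quoted temperature depression is exactly the snowline drop scaled by a 6 °C/km lapse rate; the claimed precipitation increase is the part that does not follow from this arithmetic alone.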
This type of chaotic pattern of rapidly changing land, ice, saltwater and freshwater has been proposed as the likely model for the Baltic and Scandinavian regions, as well as much of central North America at the end of the last glacial maximum, with the present-day coastlines only being achieved in the last few millennia of prehistory. Also, the effect of elevation on Scandinavia submerged a vast continental plain that had existed under much of what is now the North Sea, connecting the British Isles to Continental Europe. The redistribution of ice-water on the surface of Earth and the flow of mantle rocks causes changes in the gravitational field as well as changes to the distribution of the moment of inertia of Earth. These changes to the moment of inertia result in a change in the angular velocity, axis, and wobble of Earth's rotation. The weight of the redistributed surface mass loaded the lithosphere, caused it to flex and also induced stress within Earth. The presence of the glaciers generally suppressed the movement of faults below. During deglaciation, the faults experience accelerated slip triggering earthquakes. Earthquakes triggered near the ice margin may in turn accelerate ice calving and may account for the Heinrich events. As more ice is removed near the ice margin, more intraplate earthquakes are induced and this positive feedback may explain the fast collapse of ice sheets. In Europe, glacial erosion and isostatic sinking from weight of ice made the Baltic Sea, which before the Ice Age was all land drained by the Eridanos River. A 2015 report by the Past Global Changes Project says simulations show that a new glaciation is unlikely to happen within the next approximately 50,000 years, before the next strong drop in Northern Hemisphere summer insolation occurs "if either atmospheric CO2 concentration remains above 300 ppm or cumulative carbon emissions exceed 1000 Pg C" (i.e. 1,000 gigatonnes carbon). 
"Only for an atmospheric CO2 content below the preindustrial level may a glaciation occur within the next 10 ka. ... Given the continued anthropogenic CO2 emissions, glacial inception is very unlikely to occur in the next 50 ka, because the timescale for CO2 and temperature reduction toward unperturbed values in the absence of active removal is very long [IPCC, 2013], and only weak precessional forcing occurs in the next two precessional cycles." (A precessional cycle is around 21,000 years, the time it takes for the perihelion to move all the way around the tropical year.) Ice ages go through cycles of about 100,000 years, but the next one may well be avoided due to our carbon dioxide emissions.
An ice age is a long period of reduction in the temperature of Earth's surface and atmosphere, resulting in the presence or expansion of continental and polar ice sheets and alpine glaciers. Earth's climate alternates between ice ages and greenhouse periods, during which there are no glaciers on the planet. Earth is currently in the ice age called Quaternary glaciation. Individual pulses of cold climate within an ice age are termed glacial periods (or, alternatively, glacials, glaciations, glacial stages, stadials, stades, or colloquially, ice ages), and intermittent warm periods within an ice age are called interglacials or interstadials.

In glaciology, ice age implies the presence of extensive ice sheets in the northern and southern hemispheres. By this definition, Earth is in an interglacial period, the Holocene. The amount of anthropogenic greenhouse gases emitted into Earth's oceans and atmosphere is projected to delay the next glacial period, which otherwise would begin in around 50,000 years, by between 100,000 and 500,000 years.

History of research

In 1742, Pierre Martel (1706–1767), an engineer and geographer living in Geneva, visited the valley of Chamonix in the Alps of Savoy. Two years later he published an account of his journey. He reported that the inhabitants of that valley attributed the dispersal of erratic boulders to the glaciers, saying that they had once extended much farther. Later similar explanations were reported from other regions of the Alps. In 1815 the carpenter and chamois hunter Jean-Pierre Perraudin (1767–1858) explained erratic boulders in the Val de Bagnes in the Swiss canton of Valais as being due to glaciers previously extending further. An unknown woodcutter from Meiringen in the Bernese Oberland advocated a similar idea in a discussion with the Swiss-German geologist Jean de Charpentier (1786–1855) in 1834. Comparable explanations are also known from the Val de Ferret in the Valais and the Seeland in western Switzerland and in Goethe's scientific work. Such explanations could also be found in other parts of the world. When the Bavarian naturalist Ernst von Bibra (1806–1878) visited the Chilean Andes in 1849–1850, the natives attributed fossil moraines to the former action of glaciers.

Meanwhile, European scholars had begun to wonder what had caused the dispersal of erratic material. From the middle of the 18th century, some discussed ice as a means of transport. The Swedish mining expert Daniel Tilas (1712–1772) was, in 1742, the first person to suggest drifting sea ice was a cause of the presence of erratic boulders in the Scandinavian and Baltic regions. In 1795, the Scottish philosopher and gentleman naturalist James Hutton (1726–1797) explained erratic boulders in the Alps by the action of glaciers. Two decades later, in 1818, the Swedish botanist Göran Wahlenberg (1780–1851) published his theory of a glaciation of the Scandinavian peninsula. He regarded glaciation as a regional phenomenon.

Only a few years later, the Danish-Norwegian geologist Jens Esmark (1762–1839) argued for a sequence of worldwide ice ages. In a paper published in 1824, Esmark proposed changes in climate as the cause of those glaciations. He attempted to show that they originated from changes in Earth's orbit. Esmark discovered the similarity between moraines near Haukalivatnet lake near sea level in Rogaland and moraines at branches of Jostedalsbreen. Esmark's discoveries were later attributed to or appropriated by Theodor Kjerulf and Louis Agassiz.

During the following years, Esmark's ideas were discussed and taken over in parts by Swedish, Scottish and German scientists. At the University of Edinburgh Robert Jameson (1774–1854) seemed to be relatively open to Esmark's ideas, as reviewed by Norwegian professor of glaciology Bjørn G. Andersen (1992). Jameson's remarks about ancient glaciers in Scotland were most probably prompted by Esmark. In Germany, Albrecht Reinhard Bernhardi (1797–1849), a geologist and professor of forestry at an academy in Dreissigacker (since incorporated in the southern Thuringian city of Meiningen), adopted Esmark's theory. In a paper published in 1832, Bernhardi speculated about the polar ice caps once reaching as far as the temperate zones of the globe.

In Val de Bagnes, a valley in the Swiss Alps, there was a long-held local belief that the valley had once been covered deep in ice, and in 1815 a local chamois hunter called Jean-Pierre Perraudin attempted to convert the geologist Jean de Charpentier to the idea, pointing to deep striations in the rocks and giant erratic boulders as evidence. Charpentier held the general view that these signs were caused by vast floods, and he rejected Perraudin's theory as absurd. In 1818 the engineer Ignatz Venetz joined Perraudin and Charpentier to examine a proglacial lake above the valley created by an ice dam as a result of the 1815 eruption of Mount Tambora, which threatened to cause a catastrophic flood when the dam broke. Perraudin attempted unsuccessfully to convert his companions to his theory, but when the dam finally broke, there were only minor erratics and no striations, and Venetz concluded that Perraudin was right and that only ice could have caused such major results. In 1821 he read a prize-winning paper on the theory to the Swiss Society, but it was not published until Charpentier, who had also become converted, published it with his own more widely read paper in 1834.

In the meantime, the German botanist Karl Friedrich Schimper (1803–1867) was studying mosses which were growing on erratic boulders in the alpine upland of Bavaria. He began to wonder where such masses of stone had come from. During the summer of 1835 he made some excursions to the Bavarian Alps. Schimper came to the conclusion that ice must have been the means of transport for the boulders in the alpine upland. In the winter of 1835–36 he held some lectures in Munich. Schimper then assumed that there must have been global times of obliteration ("Verödungszeiten") with a cold climate and frozen water. Schimper spent the summer months of 1836 at Devens, near Bex, in the Swiss Alps with his former university friend Louis Agassiz (1801–1873) and Jean de Charpentier. Schimper, Charpentier and possibly Venetz convinced Agassiz that there had been a time of glaciation. During the winter of 1836–37, Agassiz and Schimper developed the theory of a sequence of glaciations. They mainly drew upon the preceding works of Venetz, Charpentier and on their own fieldwork. Agassiz appears to have been already familiar with Bernhardi's paper at that time. At the beginning of 1837, Schimper coined the term "ice age" ("Eiszeit") for the period of the glaciers. In July 1837 Agassiz presented their synthesis before the annual meeting of the Swiss Society for Natural Research at Neuchâtel. The audience was very critical, and some were opposed to the new theory because it contradicted the established opinions on climatic history.
Most contemporary scientists thought that Earth had been gradually cooling down since its birth as a molten globe.", "title": "History of research" }, { "paragraph_id": 8, "text": "In order to persuade the skeptics, Agassiz embarked on geological fieldwork. He published his book Study on Glaciers (\"Études sur les glaciers\") in 1840. Charpentier was put out by this, as he had also been preparing a book about the glaciation of the Alps. Charpentier felt that Agassiz should have given him precedence as it was he who had introduced Agassiz to in-depth glacial research. As a result of personal quarrels, Agassiz had also omitted any mention of Schimper in his book.", "title": "History of research" }, { "paragraph_id": 9, "text": "It took several decades before the ice age theory was fully accepted by scientists. This happened on an international scale in the second half of the 1870s, following the work of James Croll, including the publication of Climate and Time, in Their Geological Relations in 1875, which provided a credible explanation for the causes of ice ages.", "title": "History of research" }, { "paragraph_id": 10, "text": "There are three main types of evidence for ice ages: geological, chemical, and paleontological.", "title": "Evidence" }, { "paragraph_id": 11, "text": "Geological evidence for ice ages comes in various forms, including rock scouring and scratching, glacial moraines, drumlins, valley cutting, and the deposition of till or tillites and glacial erratics. Successive glaciations tend to distort and erase the geological evidence for earlier glaciations, making it difficult to interpret. Furthermore, this evidence was difficult to date exactly; early theories assumed that the glacials were short compared to the long interglacials. The advent of sediment and ice cores revealed the true situation: glacials are long, interglacials short. 
It took some time for the current theory to be worked out.", "title": "Evidence" }, { "paragraph_id": 12, "text": "The chemical evidence mainly consists of variations in the ratios of isotopes in fossils present in sediments and sedimentary rocks and ocean sediment cores. For the most recent glacial periods, ice cores provide climate proxies, both from the ice itself and from atmospheric samples provided by included bubbles of air. Because water containing lighter isotopes has a lower heat of evaporation, its proportion decreases with warmer conditions. This allows a temperature record to be constructed. This evidence can be confounded, however, by other factors recorded by isotope ratios.", "title": "Evidence" }, { "paragraph_id": 13, "text": "The paleontological evidence consists of changes in the geographical distribution of fossils. During a glacial period, cold-adapted organisms spread into lower latitudes, and organisms that prefer warmer conditions become extinct or retreat into lower latitudes. This evidence is also difficult to interpret because it requires:", "title": "Evidence" }, { "paragraph_id": 14, "text": "Despite the difficulties, analysis of ice core and ocean sediment cores has provided a credible record of glacials and interglacials over the past few million years. These also confirm the linkage between ice ages and continental crust phenomena such as glacial moraines, drumlins, and glacial erratics. Hence the continental crust phenomena are accepted as good evidence of earlier ice ages when they are found in layers created much earlier than the time range for which ice cores and ocean sediment cores are available.", "title": "Evidence" }, { "paragraph_id": 15, "text": "There have been at least five major ice ages in Earth's history (the Huronian, Cryogenian, Andean-Saharan, late Paleozoic, and the latest Quaternary Ice Age). 
Outside these ages, Earth was previously thought to have been ice-free even in high latitudes; such periods are known as greenhouse periods. However, other studies dispute this, finding evidence of occasional glaciations at high latitudes even during apparent greenhouse periods.", "title": "Major ice ages" }, { "paragraph_id": 16, "text": "Rocks from the earliest well-established ice age, called the Huronian, have been dated to around 2.4 to 2.1 billion years ago during the early Proterozoic Eon. Several hundreds of kilometers of the Huronian Supergroup are exposed 10 to 100 kilometers (6 to 62 mi) north of the north shore of Lake Huron, extending from near Sault Ste. Marie to Sudbury, northeast of Lake Huron, with giant layers of now-lithified till beds, dropstones, varves, outwash, and scoured basement rocks. Correlative Huronian deposits have been found near Marquette, Michigan, and correlation has been made with Paleoproterozoic glacial deposits from Western Australia. The Huronian ice age was caused by the elimination of atmospheric methane, a greenhouse gas, during the Great Oxygenation Event.", "title": "Major ice ages" }, { "paragraph_id": 17, "text": "The next well-documented ice age, and probably the most severe of the last billion years, occurred from 720 to 630 million years ago (the Cryogenian period) and may have produced a Snowball Earth in which glacial ice sheets reached the equator, possibly being ended by the accumulation of greenhouse gases such as CO2 produced by volcanoes. 
\"The presence of ice on the continents and pack ice on the oceans would inhibit both silicate weathering and photosynthesis, which are the two major sinks for CO2 at present.\" It has been suggested that the end of this ice age was responsible for the subsequent Ediacaran and Cambrian explosion, though this model is recent and controversial.", "title": "Major ice ages" }, { "paragraph_id": 18, "text": "The Andean-Saharan occurred from 460 to 420 million years ago, during the Late Ordovician and the Silurian period.", "title": "Major ice ages" }, { "paragraph_id": 19, "text": "The evolution of land plants at the onset of the Devonian period caused a long term increase in planetary oxygen levels and reduction of CO2 levels, which resulted in the late Paleozoic icehouse. Its former name, the Karoo glaciation, was named after the glacial tills found in the Karoo region of South Africa. There were extensive polar ice caps at intervals from 360 to 260 million years ago in South Africa during the Carboniferous and early Permian periods. Correlatives are known from Argentina, also in the center of the ancient supercontinent Gondwanaland.", "title": "Major ice ages" }, { "paragraph_id": 20, "text": "Although the Mesozoic Era retained a greenhouse climate over its timespan and was previously assumed to have been entirely glaciation-free, more recent studies suggest that brief periods of glaciation occurred in both hemispheres during the Early Cretaceous. Geologic and palaeoclimatological records suggest the existence of glacial periods during the Valanginian, Hauterivian, and Aptian stages of the Early Cretaceous. Ice-rafted glacial dropstones indicate that in the Northern Hemisphere, ice sheets may have extended as far south as the Iberian Peninsula during the Hauterivian and Aptian. 
Although ice sheets largely disappeared from Earth for the rest of the period (potential reports from the Turonian, otherwise the warmest period of the Phanerozoic, are disputed), ice sheets and associated sea ice appear to have briefly returned to Antarctica near the very end of the Maastrichtian just prior to the Cretaceous-Paleogene extinction event.", "title": "Major ice ages" }, { "paragraph_id": 21, "text": "The Quaternary Glaciation / Quaternary Ice Age started about 2.58 million years ago at the beginning of the Quaternary Period when the spread of ice sheets in the Northern Hemisphere began. Since then, the world has seen cycles of glaciation with ice sheets advancing and retreating on 40,000- and 100,000-year time scales called glacial periods, glacials or glacial advances, and interglacial periods, interglacials or glacial retreats. Earth is currently in an interglacial, and the last glacial period ended about 11,700 years ago. All that remains of the continental ice sheets are the Greenland and Antarctic ice sheets and smaller glaciers such as on Baffin Island.", "title": "Major ice ages" }, { "paragraph_id": 22, "text": "The definition of the Quaternary as beginning 2.58 Ma is based on the formation of the Arctic ice cap. The Antarctic ice sheet began to form earlier, at about 34 Ma, in the mid-Cenozoic (Eocene-Oligocene Boundary). The term Late Cenozoic Ice Age is used to include this early phase.", "title": "Major ice ages" }, { "paragraph_id": 23, "text": "Ice ages can be further divided by location and time; for example, the names Riss (180,000–130,000 years bp) and Würm (70,000–10,000 years bp) refer specifically to glaciation in the Alpine region. The maximum extent of the ice is not maintained for the full interval. 
The scouring action of each glaciation tends to remove most of the evidence of prior ice sheets almost completely, except in regions where the later sheet does not achieve full coverage.", "title": "Major ice ages" }, { "paragraph_id": 24, "text": "Within the current glaciation, more temperate and more severe periods have occurred. The colder periods are called glacial periods, the warmer periods interglacials, such as the Eemian Stage. There is evidence that similar glacial cycles occurred in previous glaciations, including the Andean-Saharan and the late Paleozoic ice house. The glacial cycles of the late Paleozoic ice house are likely responsible for the deposition of cyclothems.", "title": "Glacials and interglacials" }, { "paragraph_id": 25, "text": "Glacials are characterized by cooler and drier climates over most of Earth and large land and sea ice masses extending outward from the poles. Mountain glaciers in otherwise unglaciated areas extend to lower elevations due to a lower snow line. Sea levels drop due to the removal of large volumes of water above sea level in the icecaps. There is evidence that ocean circulation patterns are disrupted by glaciations. The glacials and interglacials coincide with changes in orbital forcing of climate due to Milankovitch cycles, which are periodic changes in Earth's orbit and the tilt of Earth's rotational axis.", "title": "Glacials and interglacials" }, { "paragraph_id": 26, "text": "Earth has been in an interglacial period known as the Holocene for around 11,700 years, and an article in Nature in 2004 argues that it might be most analogous to a previous interglacial that lasted 28,000 years. Predicted changes in orbital forcing suggest that the next glacial period would begin at least 50,000 years from now. 
Moreover, anthropogenic forcing from increased greenhouse gases is estimated to potentially outweigh the orbital forcing of the Milankovitch cycles for hundreds of thousands of years.", "title": "Glacials and interglacials" }, { "paragraph_id": 27, "text": "Each glacial period is subject to positive feedback which makes it more severe, and negative feedback which mitigates and (in all cases so far) eventually ends it.", "title": "Feedback processes" }, { "paragraph_id": 28, "text": "An important form of feedback is provided by Earth's albedo, which is how much of the sun's energy is reflected rather than absorbed by Earth. Ice and snow increase Earth's albedo, while forests reduce its albedo. When the air temperature decreases, ice and snow fields grow, and they reduce forest cover. This continues until competition with a negative feedback mechanism forces the system to an equilibrium.", "title": "Feedback processes" }, { "paragraph_id": 29, "text": "One theory is that when glaciers form, two things happen: the ice grinds rocks into dust, and the land becomes dry and arid. This allows winds to transport iron-rich dust into the open ocean, where it acts as a fertilizer that causes massive algal blooms that pull large amounts of CO2 out of the atmosphere. This in turn makes it even colder and causes the glaciers to grow more.", "title": "Feedback processes" }, { "paragraph_id": 30, "text": "In 1956, Ewing and Donn hypothesized that an ice-free Arctic Ocean leads to increased snowfall at high latitudes. When low-temperature ice covers the Arctic Ocean there is little evaporation or sublimation and the polar regions are quite dry in terms of precipitation, comparable to the amount found in mid-latitude deserts. This low precipitation allows high-latitude snowfalls to melt during the summer. An ice-free Arctic Ocean absorbs solar radiation during the long summer days, and evaporates more water into the Arctic atmosphere. 
With higher precipitation, portions of this snow may not melt during the summer and so glacial ice can form at lower altitudes and more southerly latitudes, reducing the temperatures over land by increased albedo as noted above. Furthermore, under this hypothesis the lack of oceanic pack ice allows increased exchange of waters between the Arctic and the North Atlantic Oceans, warming the Arctic and cooling the North Atlantic. (Current projected consequences of global warming include a brief ice-free Arctic Ocean period by 2050.) Additional fresh water flowing into the North Atlantic during a warming cycle may also reduce the global ocean water circulation. Such a reduction (by reducing the effects of the Gulf Stream) would have a cooling effect on northern Europe, which in turn would lead to increased low-latitude snow retention during the summer. It has also been suggested that during an extensive glacial, glaciers may move through the Gulf of Saint Lawrence, extending into the North Atlantic Ocean far enough to block the Gulf Stream.", "title": "Feedback processes" }, { "paragraph_id": 31, "text": "Ice sheets that form during glaciations erode the land beneath them. This can reduce the land area above sea level and thus diminish the amount of space on which ice sheets can form. This mitigates the albedo feedback, as does the rise in sea level that accompanies the reduced area of ice sheets, since open ocean has a lower albedo than land.", "title": "Feedback processes" }, { "paragraph_id": 32, "text": "Another negative feedback mechanism is the increased aridity occurring with glacial maxima, which reduces the precipitation available to maintain glaciation. 
The glacial retreat induced by this or any other process can be amplified by positive feedbacks that mirror those driving glacial advances.", "title": "Feedback processes" }, { "paragraph_id": 33, "text": "According to research published in Nature Geoscience, human emissions of carbon dioxide (CO2) will defer the next glacial period. Researchers used data on Earth's orbit to find the historical warm interglacial period that looks most like the current one, and from this predicted that the next glacial period would normally begin within 1,500 years. They go on to predict, however, that emissions have been so high that it will not begin within that time.", "title": "Feedback processes" }, { "paragraph_id": 34, "text": "The causes of ice ages are not fully understood for either the large-scale ice age periods or the smaller ebb and flow of glacial–interglacial periods within an ice age. The consensus is that several factors are important: atmospheric composition, such as the concentrations of carbon dioxide and methane (the levels of these gases over the past 800,000 years can now be measured in ice core samples from the European Project for Ice Coring in Antarctica (EPICA) Dome C); changes in Earth's orbit around the Sun known as Milankovitch cycles; the motion of tectonic plates resulting in changes in the relative location and amount of continental and oceanic crust on Earth's surface, which affect wind and ocean currents; variations in solar output; the orbital dynamics of the Earth–Moon system; the impact of relatively large meteorites and volcanism including eruptions of supervolcanoes.", "title": "Causes" }, { "paragraph_id": 35, "text": "Some of these factors influence each other. 
For example, changes in Earth's atmospheric composition (especially the concentrations of greenhouse gases) may alter the climate, while climate change itself can change the atmospheric composition (for example by changing the rate at which weathering removes CO2).", "title": "Causes" }, { "paragraph_id": 36, "text": "Maureen Raymo, William Ruddiman and others propose that the Tibetan and Colorado Plateaus are immense CO2 \"scrubbers\" with a capacity to remove enough CO2 from the global atmosphere to be a significant causal factor of the 40 million year Cenozoic Cooling trend. They further claim that approximately half of their uplift (and CO2 \"scrubbing\" capacity) occurred in the past 10 million years.", "title": "Causes" }, { "paragraph_id": 37, "text": "There is evidence that greenhouse gas levels fell at the start of ice ages and rose during the retreat of the ice sheets, but it is difficult to establish cause and effect (see the notes above on the role of weathering). Greenhouse gas levels may also have been affected by other factors which have been proposed as causes of ice ages, such as the movement of continents and volcanism.", "title": "Causes" }, { "paragraph_id": 38, "text": "The Snowball Earth hypothesis maintains that the severe freezing in the late Proterozoic was ended by an increase in CO2 levels in the atmosphere, mainly from volcanoes, and some supporters of Snowball Earth argue that it was caused in the first place by a reduction in atmospheric CO2. 
The hypothesis also warns of future Snowball Earths.", "title": "Causes" }, { "paragraph_id": 39, "text": "In 2009, further evidence was provided that changes in solar insolation provide the initial trigger for Earth to warm after an Ice Age, with secondary factors like increases in greenhouse gases accounting for the magnitude of the change.", "title": "Causes" }, { "paragraph_id": 40, "text": "The geological record appears to show that ice ages start when the continents are in positions which block or reduce the flow of warm water from the equator to the poles and thus allow ice sheets to form. The ice sheets increase Earth's reflectivity and thus reduce the absorption of solar radiation. With less radiation absorbed the atmosphere cools; the cooling allows the ice sheets to grow, which further increases reflectivity in a positive feedback loop. The ice age continues until the reduction in weathering causes an increase in the greenhouse effect.", "title": "Causes" }, { "paragraph_id": 41, "text": "There are three main contributors from the layout of the continents that obstruct the movement of warm water to the poles:", "title": "Causes" }, { "paragraph_id": 42, "text": "Since today's Earth has a continent over the South Pole and an almost land-locked ocean over the North Pole, geologists believe that Earth will continue to experience glacial periods in the geologically near future.", "title": "Causes" }, { "paragraph_id": 43, "text": "Some scientists believe that the Himalayas are a major factor in the current ice age, because these mountains have increased Earth's total rainfall and therefore the rate at which carbon dioxide is washed out of the atmosphere, decreasing the greenhouse effect. The Himalayas' formation started about 70 million years ago when the Indo-Australian Plate collided with the Eurasian Plate, and the Himalayas are still rising by about 5 mm per year because the Indo-Australian plate is still moving at 67 mm/year. 
The history of the Himalayas broadly fits the long-term decrease in Earth's average temperature since the mid-Eocene, 40 million years ago.", "title": "Causes" }, { "paragraph_id": 44, "text": "Another important contribution to ancient climate regimes is the variation of ocean currents, which are modified by continent position, sea levels and salinity, as well as other factors. They have the ability to cool (e.g. aiding the creation of Antarctic ice) and the ability to warm (e.g. giving the British Isles a temperate as opposed to a boreal climate). The closing of the Isthmus of Panama about 3 million years ago may have ushered in the present period of strong glaciation over North America by ending the exchange of water between the tropical Atlantic and Pacific Oceans.", "title": "Causes" }, { "paragraph_id": 45, "text": "Analyses suggest that ocean current fluctuations can adequately account for recent glacial oscillations. During the last glacial period, sea level fluctuated 20–30 m as water was sequestered, primarily in the Northern Hemisphere ice sheets. When ice collected and the sea level dropped sufficiently, flow through the Bering Strait (the narrow strait between Siberia and Alaska is about 50 m deep today) was reduced, resulting in increased flow from the North Atlantic. This realigned the thermohaline circulation in the Atlantic, increasing heat transport into the Arctic, which melted the polar ice accumulation and reduced other continental ice sheets. The release of water raised sea levels again, restoring the ingress of colder water from the Pacific with an accompanying shift to northern hemisphere ice accumulation.", "title": "Causes" }, { "paragraph_id": 46, "text": "According to a study published in Nature in 2021, all glacial periods of ice ages over the last 1.5 million years were associated with northward shifts of melting Antarctic icebergs which changed ocean circulation patterns, leading to more CO2 being pulled out of the atmosphere. 
The authors suggest that this process may be disrupted in the future as the Southern Ocean will become too warm for the icebergs to travel far enough to trigger these changes.", "title": "Causes" }, { "paragraph_id": 47, "text": "Matthias Kuhle's geological theory of Ice Age development was suggested by the existence of an ice sheet covering the Tibetan Plateau during the Ice Ages (Last Glacial Maximum?). According to Kuhle, the plate-tectonic uplift of Tibet past the snow-line has led to a surface of c. 2,400,000 square kilometres (930,000 sq mi) changing from bare land to ice with a 70% greater albedo. The reflection of energy into space resulted in a global cooling, triggering the Pleistocene Ice Age. Because this highland is at a subtropical latitude, with 4 to 5 times the insolation of high-latitude areas, what would be Earth's strongest heating surface has turned into a cooling surface.", "title": "Causes" }, { "paragraph_id": 48, "text": "Kuhle explains the interglacial periods by the 100,000-year cycle of radiation changes due to variations in Earth's orbit. This comparatively insignificant warming, when combined with the lowering of the Nordic inland ice areas and Tibet due to the weight of the superimposed ice-load, has led to the repeated complete thawing of the inland ice areas.", "title": "Causes" }, { "paragraph_id": 49, "text": "The Milankovitch cycles are a set of cyclic variations in characteristics of Earth's orbit around the Sun. Each cycle has a different length, so at some times their effects reinforce each other and at other times they (partially) cancel each other.", "title": "Causes" }, { "paragraph_id": 50, "text": "There is strong evidence that the Milankovitch cycles affect the occurrence of glacial and interglacial periods within an ice age. 
The present ice age is the most studied and best understood, particularly the last 400,000 years, since this is the period covered by ice cores that record atmospheric composition and proxies for temperature and ice volume. Within this period, the match of glacial/interglacial frequencies to the Milanković orbital forcing periods is so close that orbital forcing is generally accepted. The combined effects of the changing distance to the Sun, the precession of Earth's axis, and the changing tilt of Earth's axis redistribute the sunlight received by Earth. Of particular importance are changes in the tilt of Earth's axis, which affect the intensity of seasons. For example, the amount of solar influx in July at 65 degrees north latitude varies by as much as 22% (from 450 W/m² to 550 W/m²). It is widely believed that ice sheets advance when summers become too cool to melt all of the accumulated snowfall from the previous winter. Some believe that the strength of the orbital forcing is too small to trigger glaciations, but feedback mechanisms such as CO2 may explain this mismatch.", "title": "Causes" }, { "paragraph_id": 51, "text": "While Milankovitch forcing predicts that cyclic changes in Earth's orbital elements can be expressed in the glaciation record, additional explanations are necessary to explain which cycles are observed to be most important in the timing of glacial–interglacial periods. In particular, during the last 800,000 years, the dominant period of glacial–interglacial oscillation has been 100,000 years, which corresponds to changes in Earth's orbital eccentricity and orbital inclination. Yet this is by far the weakest of the three frequencies predicted by Milankovitch. During the period 3.0–0.8 million years ago, the dominant pattern of glaciation corresponded to the 41,000-year period of changes in Earth's obliquity (tilt of the axis). 
The reasons for dominance of one frequency versus another are poorly understood and an active area of current research, but the answer probably relates to some form of resonance in Earth's climate system. Recent work suggests that the 100,000-year cycle dominates because increased southern-polar sea ice raises Earth's total solar reflectivity.", "title": "Causes" }, { "paragraph_id": 52, "text": "The \"traditional\" Milankovitch explanation struggles to explain the dominance of the 100,000-year cycle over the last 8 cycles. Richard A. Muller, Gordon J. F. MacDonald, and others have pointed out that those calculations are for a two-dimensional orbit of Earth but the three-dimensional orbit also has a 100,000-year cycle of orbital inclination. They proposed that these variations in orbital inclination lead to variations in insolation, as Earth moves in and out of known dust bands in the solar system. Although this is a different mechanism to the traditional view, the \"predicted\" periods over the last 400,000 years are nearly the same. The Muller and MacDonald theory, in turn, has been challenged by Jose Antonio Rial.", "title": "Causes" }, { "paragraph_id": 53, "text": "Another worker, William Ruddiman, has suggested a model that explains the 100,000-year cycle by the modulating effect of eccentricity (weak 100,000-year cycle) on precession (26,000-year cycle) combined with greenhouse gas feedbacks in the 41,000- and 26,000-year cycles. Yet another theory has been advanced by Peter Huybers who argued that the 41,000-year cycle has always been dominant, but that Earth has entered a mode of climate behavior where only the second or third cycle triggers an ice age. This would imply that the 100,000-year periodicity is really an illusion created by averaging together cycles lasting 80,000 and 120,000 years. This theory is consistent with a simple empirical multi-state model proposed by Didier Paillard. 
Paillard suggests that the late Pleistocene glacial cycles can be seen as jumps between three quasi-stable climate states. The jumps are induced by the orbital forcing, while in the early Pleistocene the 41,000-year glacial cycles resulted from jumps between only two climate states. A dynamical model explaining this behavior was proposed by Peter Ditlevsen. This supports the suggestion that the late Pleistocene glacial cycles are not due to the weak 100,000-year eccentricity cycle, but rather a non-linear response mainly to the 41,000-year obliquity cycle.", "title": "Causes" }, { "paragraph_id": 54, "text": "There are at least two types of variation in the Sun's energy output:", "title": "Causes" }, { "paragraph_id": 55, "text": "The long-term increase in the Sun's output cannot be a cause of ice ages.", "title": "Causes" }, { "paragraph_id": 56, "text": "Volcanic eruptions may have contributed to the inception and/or the end of ice age periods. At times in the geological past, carbon dioxide levels were two or three times greater than today. Volcanoes and movements in continental plates contributed to high amounts of CO2 in the atmosphere. Carbon dioxide from volcanoes probably contributed to periods with highest overall temperatures. One suggested explanation of the Paleocene–Eocene Thermal Maximum is that undersea volcanoes released methane from clathrates and thus caused a large and rapid increase in the greenhouse effect. There appears to be no geological evidence for such eruptions at the right time, but this does not prove they did not happen.", "title": "Causes" }, { "paragraph_id": 57, "text": "The current geological period, the Quaternary, which began about 2.6 million years ago and extends into the present, is marked by warm and cold episodes: cold phases called glacials (Quaternary ice age) lasting about 100,000 years, interrupted by warmer interglacials lasting about 10,000–15,000 years. 
The last cold episode of the Last Glacial Period ended about 10,000 years ago. Earth is currently in an interglacial period of the Quaternary, called the Holocene.", "title": "Recent glacial and interglacial phases" }, { "paragraph_id": 58, "text": "The major glacial stages of the current ice age in North America are the Illinoian, Eemian, and Wisconsin glaciation. The use of the Nebraskan, Afton, Kansan, and Yarmouthian stages to subdivide the ice age in North America has been discontinued by Quaternary geologists and geomorphologists. These stages were all merged into the Pre-Illinoian in the 1980s.", "title": "Recent glacial and interglacial phases" }, { "paragraph_id": 59, "text": "During the most recent North American glaciation, during the latter part of the Last Glacial Maximum (26,000 to 13,300 years ago), ice sheets extended to about the 45th parallel north. These sheets were 3 to 4 kilometres (1.9 to 2.5 mi) thick.", "title": "Recent glacial and interglacial phases" }, { "paragraph_id": 60, "text": "This Wisconsin glaciation left widespread impacts on the North American landscape. The Great Lakes and the Finger Lakes were carved by ice deepening old valleys. Most of the lakes in Minnesota and Wisconsin were gouged out by glaciers and later filled with glacial meltwaters. The old Teays River drainage system was radically altered and largely reshaped into the Ohio River drainage system. Other rivers were dammed and diverted to new channels, such as Niagara Falls, which formed a dramatic waterfall and gorge when the waterflow encountered a limestone escarpment.
Another similar waterfall, at the present Clark Reservation State Park near Syracuse, New York, is now dry.", "title": "Recent glacial and interglacial phases" }, { "paragraph_id": 61, "text": "The area from Long Island to Nantucket, Massachusetts was formed from glacial till, and the plethora of lakes on the Canadian Shield in northern Canada can be almost entirely attributed to the action of the ice. As the ice retreated and the rock dust dried, winds carried the material hundreds of miles, forming beds of loess many dozens of feet thick in the Missouri Valley. Post-glacial rebound continues to reshape the Great Lakes and other areas formerly under the weight of the ice sheets.", "title": "Recent glacial and interglacial phases" }, { "paragraph_id": 62, "text": "The Driftless Area, a portion of western and southwestern Wisconsin along with parts of adjacent Minnesota, Iowa, and Illinois, was not covered by glaciers.", "title": "Recent glacial and interglacial phases" }, { "paragraph_id": 63, "text": "An especially interesting climatic change during glacial times took place in the semi-arid Andes. Besides the expected cooling relative to the current climate, a significant change in precipitation occurred there. Research in the presently semi-arid, subtropical Aconcagua massif (6,962 m) has revealed an unexpectedly extensive glaciation of the \"ice stream network\" type. The connected valley glaciers, exceeding 100 km in length, flowed down the east side of this section of the Andes at 32–34°S and 69–71°W to an elevation of 2,060 m, and on the western windward side clearly lower still. Whereas current glaciers scarcely reach 10 km in length, the snowline (equilibrium line altitude, ELA) runs at a height of 4,600 m; at that time it was lowered to 3,200 m above sea level, a depression of about 1,400 m. It follows that, besides an annual temperature depression of about 8.4 °C, there was also an increase in precipitation.
Accordingly, during glacial times the humid climatic belt that today lies several degrees of latitude further south was shifted much further north.", "title": "Recent glacial and interglacial phases" }, { "paragraph_id": 64, "text": "Although the last glacial period ended more than 8,000 years ago, its effects can still be felt today. For example, the moving ice carved out the landscape in Canada (See Canadian Arctic Archipelago), Greenland, northern Eurasia and Antarctica. The erratic boulders, till, drumlins, eskers, fjords, kettle lakes, moraines, cirques, horns, etc., are typical features left behind by the glaciers. The weight of the ice sheets was so great that they deformed Earth's crust and mantle. After the ice sheets melted, the ice-covered land rebounded. Due to the high viscosity of Earth's mantle, the flow of mantle rocks which controls the rebound process is very slow—at a rate of about 1 cm/year near the center of rebound area today.", "title": "Effects of glaciation" }, { "paragraph_id": 65, "text": "During glaciation, water was taken from the oceans to form the ice at high latitudes, thus global sea level dropped by about 110 meters, exposing the continental shelves and forming land-bridges between land-masses for animals to migrate. During deglaciation, the melted ice-water returned to the oceans, causing sea level to rise. This process can cause sudden shifts in coastlines and hydrological systems resulting in newly submerged lands, emerging lands, collapsed ice dams resulting in salination of lakes, new ice dams creating vast areas of freshwater, and a general alteration in regional weather patterns on a large but temporary scale. It can even cause temporary reglaciation.
This type of chaotic pattern of rapidly changing land, ice, saltwater and freshwater has been proposed as the likely model for the Baltic and Scandinavian regions, as well as much of central North America at the end of the last glacial maximum, with the present-day coastlines only being achieved in the last few millennia of prehistory. Also, the effect of elevation on Scandinavia submerged a vast continental plain that had existed under much of what is now the North Sea, connecting the British Isles to Continental Europe.", "title": "Effects of glaciation" }, { "paragraph_id": 66, "text": "The redistribution of ice-water on the surface of Earth and the flow of mantle rocks causes changes in the gravitational field as well as changes to the distribution of the moment of inertia of Earth. These changes to the moment of inertia result in a change in the angular velocity, axis, and wobble of Earth's rotation.", "title": "Effects of glaciation" }, { "paragraph_id": 67, "text": "The weight of the redistributed surface mass loaded the lithosphere, caused it to flex and also induced stress within Earth. The presence of the glaciers generally suppressed the movement of faults below. During deglaciation, the faults experience accelerated slip triggering earthquakes. Earthquakes triggered near the ice margin may in turn accelerate ice calving and may account for the Heinrich events. 
As more ice is removed near the ice margin, more intraplate earthquakes are induced and this positive feedback may explain the fast collapse of ice sheets.", "title": "Effects of glaciation" }, { "paragraph_id": 68, "text": "In Europe, glacial erosion and isostatic sinking from the weight of the ice made the Baltic Sea, which before the Ice Age was all land drained by the Eridanos River.", "title": "Effects of glaciation" }, { "paragraph_id": 69, "text": "", "title": "Effects of glaciation" }, { "paragraph_id": 70, "text": "A 2015 report by the Past Global Changes Project says simulations show that a new glaciation is unlikely to happen within the next approximately 50,000 years, before the next strong drop in Northern Hemisphere summer insolation occurs \"if either atmospheric CO2 concentration remains above 300 ppm or cumulative carbon emissions exceed 1000 Pg C\" (i.e. 1,000 gigatonnes carbon). \"Only for an atmospheric CO2 content below the preindustrial level may a glaciation occur within the next 10 ka. ... Given the continued anthropogenic CO2 emissions, glacial inception is very unlikely to occur in the next 50 ka, because the timescale for CO2 and temperature reduction toward unperturbed values in the absence of active removal is very long [IPCC, 2013], and only weak precessional forcing occurs in the next two precessional cycles.\" (A precessional cycle is around 21,000 years, the time it takes for the perihelion to move all the way around the tropical year.)", "title": "Future ice ages" }, { "paragraph_id": 71, "text": "Ice ages go through cycles of about 100,000 years, but the next one may well be avoided due to our carbon dioxide emissions.", "title": "Future ice ages" } ]
An ice age is a long period of reduction in the temperature of Earth's surface and atmosphere, resulting in the presence or expansion of continental and polar ice sheets and alpine glaciers. Earth's climate alternates between ice ages and greenhouse periods, during which there are no glaciers on the planet. Earth is currently in the ice age called Quaternary glaciation. Individual pulses of cold climate within an ice age are termed glacial periods, and intermittent warm periods within an ice age are called interglacials or interstadials. In glaciology, ice age implies the presence of extensive ice sheets in the northern and southern hemispheres. By this definition, Earth is in an interglacial period—the Holocene. The amount of anthropogenic greenhouse gases emitted into Earth's oceans and atmosphere is projected to delay the next glacial period, which otherwise would begin in around 50,000 years, by between 100,000 and 500,000 years.
2001-08-20T14:29:13Z
2023-12-26T21:52:41Z
[ "Template:Cite book", "Template:Greenhouse and Icehouse Earth", "Template:CO2", "Template:Pp-semi-indef", "Template:Citation needed", "Template:Div col", "Template:Harv", "Template:Portal bar", "Template:Short description", "Template:Annotated link", "Template:Div col end", "Template:Reflist", "Template:Cite journal", "Template:Wikibooks", "Template:See also", "Template:Pp-move", "Template:Convert", "Template:By whom", "Template:Main", "Template:Cite web", "Template:Cite news", "Template:About", "Template:NSRW poster", "Template:Webarchive", "Template:For timeline", "Template:ISBN", "Template:Citation", "Template:Dead link", "Template:Cite magazine", "Template:Harvnb", "Template:Commons category", "Template:Ice ages", "Template:Continental Glaciations", "Template:Authority control", "Template:Multiple image" ]
https://en.wikipedia.org/wiki/Ice_age
15,362
Irving Langmuir
Irving Langmuir (/ˈlæŋmjʊər/; January 31, 1881 – August 16, 1957) was an American chemist, physicist, and engineer. He was awarded the Nobel Prize in Chemistry in 1932 for his work in surface chemistry. Langmuir's most famous publication is the 1919 article "The Arrangement of Electrons in Atoms and Molecules" in which, building on Gilbert N. Lewis's cubical atom theory and Walther Kossel's chemical bonding theory, he outlined his "concentric theory of atomic structure". Langmuir became embroiled in a priority dispute with Lewis over this work; Langmuir's presentation skills were largely responsible for the popularization of the theory, although the credit for the theory itself belongs mostly to Lewis. While at General Electric from 1909 to 1950, Langmuir advanced several fields of physics and chemistry, inventing the gas-filled incandescent lamp and the hydrogen welding technique. The Langmuir Laboratory for Atmospheric Research near Socorro, New Mexico, was named in his honor, as was the American Chemical Society journal for surface science called Langmuir. Irving Langmuir was born in Brooklyn, New York, on January 31, 1881. He was the third of the four children of Charles Langmuir and Sadie, née Comings. During his childhood, Langmuir's parents encouraged him to carefully observe nature and to keep a detailed record of his various observations. When Irving was eleven, it was discovered that he had poor eyesight. When this problem was corrected, details that had previously eluded him were revealed, and his interest in the complications of nature was heightened. During his childhood, Langmuir was influenced by his older brother, Arthur Langmuir. Arthur was a research chemist who encouraged Irving to be curious about nature and how things work. Arthur helped Irving set up his first chemistry lab in the corner of his bedroom, and he was content to answer the myriad questions that Irving would pose. 
Langmuir's hobbies included mountaineering, skiing, piloting his own plane, and classical music. In addition to his professional interest in the politics of atomic energy, he was concerned about wilderness conservation. Langmuir attended several schools and institutes in America and Paris (1892–1895) before graduating high school from Chestnut Hill Academy (1898), an elite private school located in the affluent Chestnut Hill area in Philadelphia. He graduated with a Bachelor of Science degree in metallurgical engineering (Met.E.) from the Columbia University School of Mines in 1903. He earned his PhD in 1906 under Friedrich Dolezalek in Göttingen, for research done using the "Nernst glower", an electric lamp invented by Nernst. His doctoral thesis was entitled "On the Partial Recombination of Dissolved Gases During Cooling." He later did postgraduate work in chemistry. Langmuir then taught at Stevens Institute of Technology in Hoboken, New Jersey, until 1909, when he began working at the General Electric research laboratory (Schenectady, New York). His initial contributions to science came from his study of light bulbs (a continuation of his PhD work). His first major development was the improvement of the diffusion pump, which ultimately led to the invention of the high-vacuum rectifier and amplifier tubes. A year later, he and colleague Lewi Tonks discovered that the lifetime of a tungsten filament could be greatly lengthened by filling the bulb with an inert gas, such as argon, the critical factor (overlooked by other researchers) being the need for extreme cleanliness in all stages of the process. He also discovered that twisting the filament into a tight coil improved its efficiency. These were important developments in the history of the incandescent light bulb. 
His work in surface chemistry began at this point, when he discovered that molecular hydrogen introduced into a tungsten-filament bulb dissociated into atomic hydrogen and formed a layer one atom thick on the surface of the bulb. His assistant in vacuum tube research was his cousin William Comings White. As he continued to study filaments in vacuum and different gas environments, he began to study the emission of charged particles from hot filaments (thermionic emission). He was one of the first scientists to work with plasmas, and he was the first to call these ionized gases by that name because they reminded him of blood plasma. Langmuir and Tonks discovered electron density waves in plasmas that are now known as Langmuir waves. He introduced the concept of electron temperature and in 1924 invented the diagnostic method for measuring both temperature and density with an electrostatic probe, now called a Langmuir probe and commonly used in plasma physics. The current of a biased probe tip is measured as a function of bias voltage to determine the local plasma temperature and density. He also discovered atomic hydrogen, which he put to use by inventing the atomic hydrogen welding process, the first plasma weld ever made. Plasma welding has since been developed into gas tungsten arc welding. In 1917, he published a paper on the chemistry of oil films that later became the basis for the award of the 1932 Nobel Prize in chemistry. Langmuir theorized that oils consisting of an aliphatic chain with a hydrophilic end group (perhaps an alcohol or acid) were oriented as a film one molecule thick upon the surface of water, with the hydrophilic group down in the water and the hydrophobic chains clumped together on the surface. The thickness of the film could be easily determined from the known volume and area of the oil, which allowed investigation of the molecular configuration before spectroscopic techniques were available.
Following World War I, Langmuir contributed to atomic theory and the understanding of atomic structure by defining the modern concept of valence shells and isotopes. Langmuir was president of the Institute of Radio Engineers in 1923. Based on his work at General Electric, John B. Taylor developed a detector that ionizes beams of alkali metals, now known as the Langmuir-Taylor detector. In 1927, he was one of the participants in the fifth Solvay Conference on Physics that took place at the International Solvay Institute for Physics in Belgium. He joined Katharine B. Blodgett to study thin films and surface adsorption. They introduced the concept of a monolayer (a layer of material one molecule thick) and the two-dimensional physics which describe such a surface. In 1932 he received the Nobel Prize in Chemistry "for his discoveries and investigations in surface chemistry." In 1938, Langmuir's scientific interests began to turn to atmospheric science and meteorology. One of his first ventures, although tangentially related, was a refutation of the claim of entomologist Charles H. T. Townsend that the deer botfly flew at speeds of over 800 miles per hour. Langmuir estimated the fly's speed at 25 miles per hour. After observing windrows of drifting seaweed in the Sargasso Sea he discovered a wind-driven surface circulation in the sea. It is now called the Langmuir circulation. During World War II, Langmuir and Research Associate Vincent J Schaefer worked on improving naval sonar for submarine detection, and later on developing protective smoke screens and methods for deicing aircraft wings.
This research led him to theorize, and then demonstrate in the laboratory and in the atmosphere, that the introduction of ice nuclei (dry ice and silver iodide) into a sufficiently moist cloud of low temperature (supercooled water) could induce precipitation (cloud seeding); though in frequent practice, particularly in Australia and the People's Republic of China, the efficiency of this technique remains controversial today. In 1953 Langmuir coined the term "pathological science", describing research conducted in accordance with the scientific method, but tainted by unconscious bias or subjective effects. This is in contrast to pseudoscience, which has no pretense of following the scientific method. In his original speech, he presented ESP and flying saucers as examples of pathological science; since then, the label has been applied to polywater and cold fusion. His house in Schenectady was designated a National Historic Landmark in 1976. In 1912 Langmuir married Marion Mersereau (1883–1971), with whom he adopted two children: Kenneth and Barbara. After a short illness, he died in Woods Hole, Massachusetts, from a heart attack on August 16, 1957. His obituary ran on the front page of The New York Times. On his religious views, Langmuir was an agnostic. According to author Kurt Vonnegut, Langmuir was the inspiration for his fictional scientist Dr. Felix Hoenikker in the novel Cat's Cradle. The character's invention of ice-nine eventually destroyed the world by seeding a new phase of ice water (similar in name only to Ice IX). Langmuir had worked with Vonnegut's brother, Bernard Vonnegut at General Electric on seeding ice crystals to diminish or increase rain or storms. Serendipity in Science: Twenty Years at Langmuir University, an autobiography by Vincent J. Schaefer, ScD; compiled and edited by Don Rittner. Circle Square Press, Voorheesville, NY, 2013.
[ { "paragraph_id": 0, "text": "Irving Langmuir (/ˈlæŋmjʊər/; January 31, 1881 – August 16, 1957) was an American chemist, physicist, and engineer. He was awarded the Nobel Prize in Chemistry in 1932 for his work in surface chemistry.", "title": "" }, { "paragraph_id": 1, "text": "Langmuir's most famous publication is the 1919 article \"The Arrangement of Electrons in Atoms and Molecules\" in which, building on Gilbert N. Lewis's cubical atom theory and Walther Kossel's chemical bonding theory, he outlined his \"concentric theory of atomic structure\". Langmuir became embroiled in a priority dispute with Lewis over this work; Langmuir's presentation skills were largely responsible for the popularization of the theory, although the credit for the theory itself belongs mostly to Lewis. While at General Electric from 1909 to 1950, Langmuir advanced several fields of physics and chemistry, inventing the gas-filled incandescent lamp and the hydrogen welding technique. The Langmuir Laboratory for Atmospheric Research near Socorro, New Mexico, was named in his honor, as was the American Chemical Society journal for surface science called Langmuir.", "title": "" }, { "paragraph_id": 2, "text": "Irving Langmuir was born in Brooklyn, New York, on January 31, 1881. He was the third of the four children of Charles Langmuir and Sadie, née Comings. During his childhood, Langmuir's parents encouraged him to carefully observe nature and to keep a detailed record of his various observations. When Irving was eleven, it was discovered that he had poor eyesight. When this problem was corrected, details that had previously eluded him were revealed, and his interest in the complications of nature was heightened.", "title": "Biography" }, { "paragraph_id": 3, "text": "During his childhood, Langmuir was influenced by his older brother, Arthur Langmuir. Arthur was a research chemist who encouraged Irving to be curious about nature and how things work. 
Arthur helped Irving set up his first chemistry lab in the corner of his bedroom, and he was content to answer the myriad questions that Irving would pose. Langmuir's hobbies included mountaineering, skiing, piloting his own plane, and classical music. In addition to his professional interest in the politics of atomic energy, he was concerned about wilderness conservation.", "title": "Biography" }, { "paragraph_id": 4, "text": "Langmuir attended several schools and institutes in America and Paris (1892–1895) before graduating high school from Chestnut Hill Academy (1898), an elite private school located in the affluent Chestnut Hill area in Philadelphia. He graduated with a Bachelor of Science degree in metallurgical engineering (Met.E.) from the Columbia University School of Mines in 1903. He earned his PhD in 1906 under Friedrich Dolezalek in Göttingen, for research done using the \"Nernst glower\", an electric lamp invented by Nernst. His doctoral thesis was entitled \"On the Partial Recombination of Dissolved Gases During Cooling.\" He later did postgraduate work in chemistry. Langmuir then taught at Stevens Institute of Technology in Hoboken, New Jersey, until 1909, when he began working at the General Electric research laboratory (Schenectady, New York).", "title": "Biography" }, { "paragraph_id": 5, "text": "His initial contributions to science came from his study of light bulbs (a continuation of his PhD work). His first major development was the improvement of the diffusion pump, which ultimately led to the invention of the high-vacuum rectifier and amplifier tubes. A year later, he and colleague Lewi Tonks discovered that the lifetime of a tungsten filament could be greatly lengthened by filling the bulb with an inert gas, such as argon, the critical factor (overlooked by other researchers) being the need for extreme cleanliness in all stages of the process. He also discovered that twisting the filament into a tight coil improved its efficiency. 
These were important developments in the history of the incandescent light bulb. His work in surface chemistry began at this point, when he discovered that molecular hydrogen introduced into a tungsten-filament bulb dissociated into atomic hydrogen and formed a layer one atom thick on the surface of the bulb.", "title": "Biography" }, { "paragraph_id": 6, "text": "His assistant in vacuum tube research was his cousin William Comings White.", "title": "Biography" }, { "paragraph_id": 7, "text": "As he continued to study filaments in vacuum and different gas environments, he began to study the emission of charged particles from hot filaments (thermionic emission). He was one of the first scientists to work with plasmas, and he was the first to call these ionized gases by that name because they reminded him of blood plasma. Langmuir and Tonks discovered electron density waves in plasmas that are now known as Langmuir waves.", "title": "Biography" }, { "paragraph_id": 8, "text": "He introduced the concept of electron temperature and in 1924 invented the diagnostic method for measuring both temperature and density with an electrostatic probe, now called a Langmuir probe and commonly used in plasma physics. The current of a biased probe tip is measured as a function of bias voltage to determine the local plasma temperature and density. He also discovered atomic hydrogen, which he put to use by inventing the atomic hydrogen welding process, the first plasma weld ever made. Plasma welding has since been developed into gas tungsten arc welding.", "title": "Biography" }, { "paragraph_id": 9, "text": "In 1917, he published a paper on the chemistry of oil films that later became the basis for the award of the 1932 Nobel Prize in chemistry.
Langmuir theorized that oils consisting of an aliphatic chain with a hydrophilic end group (perhaps an alcohol or acid) were oriented as a film one molecule thick upon the surface of water, with the hydrophilic group down in the water and the hydrophobic chains clumped together on the surface. The thickness of the film could be easily determined from the known volume and area of the oil, which allowed investigation of the molecular configuration before spectroscopic techniques were available.", "title": "Biography" }, { "paragraph_id": 10, "text": "Following World War I, Langmuir contributed to atomic theory and the understanding of atomic structure by defining the modern concept of valence shells and isotopes.", "title": "Biography" }, { "paragraph_id": 11, "text": "Langmuir was president of the Institute of Radio Engineers in 1923.", "title": "Biography" }, { "paragraph_id": 12, "text": "Based on his work at General Electric, John B. Taylor developed a detector that ionizes beams of alkali metals, now known as the Langmuir-Taylor detector. In 1927, he was one of the participants in the fifth Solvay Conference on Physics that took place at the International Solvay Institute for Physics in Belgium.", "title": "Biography" }, { "paragraph_id": 13, "text": "He joined Katharine B. Blodgett to study thin films and surface adsorption. They introduced the concept of a monolayer (a layer of material one molecule thick) and the two-dimensional physics which describe such a surface. In 1932 he received the Nobel Prize in Chemistry \"for his discoveries and investigations in surface chemistry.\" In 1938, Langmuir's scientific interests began to turn to atmospheric science and meteorology. One of his first ventures, although tangentially related, was a refutation of the claim of entomologist Charles H. T. Townsend that the deer botfly flew at speeds of over 800 miles per hour.
Langmuir estimated the fly's speed at 25 miles per hour.", "title": "Biography" }, { "paragraph_id": 14, "text": "After observing windrows of drifting seaweed in the Sargasso Sea he discovered a wind-driven surface circulation in the sea. It is now called the Langmuir circulation.", "title": "Biography" }, { "paragraph_id": 15, "text": "During World War II, Langmuir and Research Associate Vincent J Schaefer worked on improving naval sonar for submarine detection, and later on developing protective smoke screens and methods for deicing aircraft wings. This research led him to theorize, and then demonstrate in the laboratory and in the atmosphere, that the introduction of ice nuclei (dry ice and silver iodide) into a sufficiently moist cloud of low temperature (supercooled water) could induce precipitation (cloud seeding); though in frequent practice, particularly in Australia and the People's Republic of China, the efficiency of this technique remains controversial today.", "title": "Biography" }, { "paragraph_id": 16, "text": "In 1953 Langmuir coined the term \"pathological science\", describing research conducted in accordance with the scientific method, but tainted by unconscious bias or subjective effects. This is in contrast to pseudoscience, which has no pretense of following the scientific method. In his original speech, he presented ESP and flying saucers as examples of pathological science; since then, the label has been applied to polywater and cold fusion.", "title": "Biography" }, { "paragraph_id": 17, "text": "His house in Schenectady was designated a National Historic Landmark in 1976.", "title": "Biography" }, { "paragraph_id": 18, "text": "In 1912 Langmuir married Marion Mersereau (1883–1971), with whom he adopted two children: Kenneth and Barbara. After a short illness, he died in Woods Hole, Massachusetts, from a heart attack on August 16, 1957.
His obituary ran on the front page of The New York Times.", "title": "Biography" }, { "paragraph_id": 19, "text": "On his religious views, Langmuir was an agnostic.", "title": "Biography" }, { "paragraph_id": 20, "text": "According to author Kurt Vonnegut, Langmuir was the inspiration for his fictional scientist Dr. Felix Hoenikker in the novel Cat's Cradle. The character's invention of ice-nine eventually destroyed the world by seeding a new phase of ice water (similar in name only to Ice IX). Langmuir had worked with Vonnegut's brother, Bernard Vonnegut at General Electric on seeding ice crystals to diminish or increase rain or storms.", "title": "Biography" }, { "paragraph_id": 21, "text": "Serendipity in Science: Twenty Years at Langmuir University, an autobiography by Vincent J. Schaefer, ScD; compiled and edited by Don Rittner. Circle Square Press, Voorheesville, NY, 2013.", "title": "References" } ]
Irving Langmuir was an American chemist, physicist, and engineer. He was awarded the Nobel Prize in Chemistry in 1932 for his work in surface chemistry. Langmuir's most famous publication is the 1919 article "The Arrangement of Electrons in Atoms and Molecules" in which, building on Gilbert N. Lewis's cubical atom theory and Walther Kossel's chemical bonding theory, he outlined his "concentric theory of atomic structure". Langmuir became embroiled in a priority dispute with Lewis over this work; Langmuir's presentation skills were largely responsible for the popularization of the theory, although the credit for the theory itself belongs mostly to Lewis. While at General Electric from 1909 to 1950, Langmuir advanced several fields of physics and chemistry, inventing the gas-filled incandescent lamp and the hydrogen welding technique. The Langmuir Laboratory for Atmospheric Research near Socorro, New Mexico, was named in his honor, as was the American Chemical Society journal for surface science called Langmuir.
2001-12-12T19:46:29Z
2023-12-18T14:54:49Z
[ "Template:Citation", "Template:Cite news", "Template:Wikiquote", "Template:Atomic models", "Template:Presidents of the American Chemical Society", "Template:Use mdy dates", "Template:Née", "Template:Cite book", "Template:Harvnb", "Template:IPAc-en", "Template:US patent", "Template:Nobel Prize in Chemistry Laureates 1926-1950", "Template:1932 Nobel Prize winners", "Template:Cite web", "Template:Internet Archive author", "Template:Nobelprize", "Template:Authority control", "Template:Short description", "Template:Infobox scientist", "Template:Reflist", "Template:Cite journal" ]
https://en.wikipedia.org/wiki/Irving_Langmuir
15,365
International Association of Travel Agents Network
The International Airlines Travel Agent Network (IATAN) is a Miami-based trade association in the United States representing the interests of its member companies (airlines) and the U.S. travel distribution network (travel agencies). It is an independent department of the International Air Transport Association (IATA). In addition, it (along with the IATA) is the body responsible for the standard international codes for airlines, airports, hotels, cities and car rental firms. These codes provide a method to link international travel network with international suppliers.
[ { "paragraph_id": 0, "text": "The International Airlines Travel Agent Network (IATAN) is a Miami-based trade association in the United States representing the interests of its member companies (airlines) and the U.S. travel distribution network (travel agencies). It is an independent department of the International Air Transport Association (IATA).", "title": "" }, { "paragraph_id": 1, "text": "In addition, it (along with the IATA) is the body responsible for the standard international codes for airlines, airports, hotels, cities and car rental firms. These codes provide a method to link international travel network with international suppliers.", "title": "" }, { "paragraph_id": 2, "text": "", "title": "References" } ]
The International Airlines Travel Agent Network (IATAN) is a Miami-based trade association in the United States representing the interests of its member companies (airlines) and the U.S. travel distribution network. It is an independent department of the International Air Transport Association (IATA). In addition, it is the body responsible for the standard international codes for airlines, airports, hotels, cities and car rental firms. These codes provide a method to link the international travel network with international suppliers.
2022-11-14T20:30:32Z
[ "Template:Business-org-stub", "Template:Refimprove", "Template:Infobox organization", "Template:Reflist", "Template:Cite web", "Template:Commercial air travel" ]
https://en.wikipedia.org/wiki/International_Association_of_Travel_Agents_Network
15,368
Insider trading
Insider trading is the trading of a public company's stock or other securities (such as bonds or stock options) based on material, nonpublic information about the company. In various countries, some kinds of trading based on insider information are illegal. This is because it is seen as unfair to other investors who do not have access to the information, as the investor with insider information could potentially make larger profits than a typical investor could make. The rules governing insider trading are complex and vary significantly from country to country. The extent of enforcement also varies from one country to another. The definition of insider in one jurisdiction can be broad and may cover not only insiders themselves but also any persons related to them, such as brokers, associates, and even family members. A person who becomes aware of non-public information and trades on that basis may be guilty of a crime. Trading by specific insiders, such as employees, is commonly permitted as long as it does not rely on material information not in the public domain. Many jurisdictions require that such trading be reported so that the transactions can be monitored. In the United States and several other jurisdictions, trading conducted by corporate officers, key employees, directors, or significant shareholders must be reported to the regulator or publicly disclosed, usually within a few business days of the trade. In these cases, insiders in the United States are required to file Form 4 with the U.S. Securities and Exchange Commission (SEC) when buying or selling shares of their own companies. The authors of one study claim that illegal insider trading raises the cost of capital for securities issuers, thus decreasing overall economic growth. Some economists, such as Henry Manne, argued that insider trading should be allowed and could, in fact, benefit markets.
There has long been "considerable academic debate" among business and legal scholars over whether or not insider trading should be illegal. Several arguments against outlawing insider trading have been identified: for example, although insider trading is illegal, most insider trading is never detected by law enforcement, and thus the illegality of insider trading might give the public the potentially misleading impression that "stock market trading is an unrigged game that anyone can play." Some legal analysis has questioned whether insider trading actually harms anyone in the legal sense, since some have questioned whether insider trading causes anyone to suffer an actual "loss" and whether anyone who suffers a loss is owed an actual legal duty by the insiders in question. Rules prohibiting or criminalizing insider trading on material non-public information exist in most jurisdictions around the world (Bhattacharya and Daouk, 2002), but the details and the efforts to enforce them vary considerably. In the United States, Sections 16(b) and 10(b) of the Securities Exchange Act of 1934 directly and indirectly address insider trading. The U.S. Congress enacted this law after the stock market crash of 1929. While the United States is generally viewed as making the most serious efforts to enforce its insider trading laws, the broader scope of the European model legislation provides a stricter framework against illegal insider trading. In the European Union and the United Kingdom, all trading on non-public information is, under the rubric of market abuse, subject at a minimum to civil penalties and possible criminal penalties as well. The UK's Financial Conduct Authority has the responsibility to investigate and prosecute insider dealing, as defined by the Criminal Justice Act 1993.
In the United States, Canada, Australia, Germany and Romania for mandatory reporting purposes, corporate insiders are defined as a company's officers, directors and any beneficial owners of more than 10% of a class of the company's equity securities. Trades made by these types of insiders in the company's own stock, based on material non-public information, are considered fraudulent since the insiders are violating the fiduciary duty that they owe to the shareholders. The corporate insider, simply by accepting employment, has undertaken a legal obligation to the shareholders to put the shareholders' interests before their own, in matters related to the corporation. When insiders buy or sell based on company-owned information, they are said to be violating their obligation to the shareholders. For example, illegal insider trading would occur if the chief executive officer of Company A learned (prior to a public announcement) that Company A will be taken over and then bought shares in Company A while knowing that the share price would likely rise. In the United States and many other jurisdictions, "insiders" are not just limited to corporate officials and major shareholders where illegal insider trading is concerned but can include any individual who trades shares based on material non-public information in violation of some duty of trust. This duty may be imputed; for example, in many jurisdictions, in cases where a corporate insider "tips" a friend about non-public information likely to have an effect on the company's share price, the duty the corporate insider owes the company is now imputed to the friend and the friend violates a duty to the company if he trades on the basis of this information. 
Liability for insider trading violations generally cannot be avoided by passing on the information in an "I scratch your back; you scratch mine" or quid pro quo arrangement if the person receiving the information knew or should have known that the information was material non-public information. In the United States, at least one court has indicated that the insider who releases the non-public information must have done so for an improper purpose. In the case of a person who receives the insider information (called the "tippee"), the tippee must also have been aware that the insider released the information for an improper purpose. One commentator has argued that if Company A's CEO did not trade on undisclosed takeover news, but instead passed the information on to his brother-in-law who traded on it, illegal insider trading would still have occurred (albeit by proxy, by passing it on to a "non-insider" so Company A's CEO would not get his hands dirty). A newer view of insider trading, the misappropriation theory, is now accepted in U.S. law. It states that anyone who misappropriates material non-public information and trades on that information in any stock may be guilty of insider trading. This can include eliciting material non-public information from an insider with the intention of trading on it or passing it on to someone who will. This theory forms the background for the securities regulations that enforce insider trading prohibitions. Disgorgement represents ill-gotten gains (or losses avoided) resulting from individuals violating the securities laws. In general, in countries where insider trading is forbidden, the competent authority seeks disgorgement to ensure that securities law violators do not profit from their illegal activity. When appropriate, the disgorged funds are returned to the injured investors. Disgorgements can be ordered in either administrative proceedings or civil actions, and the cases can be settled or litigated.
Payment of disgorgement can be either completely or partially waived based on the defendant demonstrating an inability to pay. In settled administrative proceedings, Enforcement may recommend, if appropriate, that the disgorgement be waived. There are several approaches to quantifying disgorgement; an innovative procedure based on probability theory was defined by Marcello Minenna by directly analyzing the time periods of the transactions involved in the insider trading. Proving that someone has been responsible for a trade can be difficult because traders may try to hide behind nominees, offshore companies, and other proxies. The Securities and Exchange Commission (SEC) prosecutes over 50 cases each year, with many being settled administratively out of court. The SEC and several stock exchanges actively monitor trading, looking for suspicious activity. The SEC does not have criminal enforcement authority but can refer serious matters to the U.S. Attorney's Office for further investigation and prosecution. In the United States and most non-European jurisdictions, not all trading on non-public information is illegal insider trading. For example, a person in a restaurant who hears the CEO of Company A at the next table tell the CFO that the company's profits will be higher than expected and then buys the stock is not guilty of insider trading—unless he or she had some closer connection to the company or company officers. However, even where the tippee is not himself an insider, where the tippee knows that the information is non-public and the information is paid for, or the tipper receives a benefit for giving it, then in the broader-scope jurisdictions the subsequent trading is illegal. Nevertheless, information about a tender offer (usually regarding a merger or acquisition) is held to a higher standard.
If this type of information is obtained (directly or indirectly) and there is reason to believe it is nonpublic, there is a duty to disclose it or abstain from trading. In the United States, in addition to civil penalties, the trader may also be subject to criminal prosecution for fraud; where SEC regulations have been broken, the U.S. Department of Justice (DOJ) may be called in to conduct an independent parallel investigation. If the DOJ finds criminal wrongdoing, the department may file criminal charges. Legal trades by insiders are common, as employees of publicly traded corporations often have stock or stock options. These trades are made public in the United States through Securities and Exchange Commission filings, mainly Form 4. U.S. SEC Rule 10b5-1 clarified that the prohibition against insider trading does not require proof that an insider actually used material nonpublic information when conducting a trade; possession of such information alone is sufficient to violate the provision, and the SEC would infer that an insider in possession of material nonpublic information used this information when conducting a trade. However, SEC Rule 10b5-1 also created for insiders an affirmative defense if the insider can demonstrate that the trades conducted on behalf of the insider were conducted as part of a pre-existing contract or written binding plan for trading in the future. For example, if an insider expects to retire after a specific period of time and, as part of retirement planning, the insider has adopted a written binding plan to sell a specific amount of the company's stock every month for two years, and the insider later comes into possession of material nonpublic information about the company, trades based on the original plan might not constitute prohibited insider trading.
Until the 21st century and the European Union's market abuse laws, the United States was the leading country in prohibiting insider trading made on the basis of material non-public information. Thomas Newkirk and Melissa Robertson of the SEC summarize the development of US insider trading laws. Insider trading has a base offense level of 8, which puts it in Zone A under the U.S. Sentencing Guidelines. This means that first-time offenders are eligible to receive probation rather than incarceration. U.S. insider trading prohibitions are based on English and American common law prohibitions against fraud. In 1909, well before the Securities Exchange Act was passed, the United States Supreme Court ruled that a corporate director who bought that company's stock when he knew the stock's price was about to increase committed fraud by buying but not disclosing his inside information. Section 15 of the Securities Act of 1933 contained prohibitions of fraud in the sale of securities, later greatly strengthened by the Securities Exchange Act of 1934. Section 16(b) of the Securities Exchange Act of 1934 prohibits short-swing profits (from any purchases and sales within any six-month period) made by corporate directors, officers, or stockholders owning more than 10% of a firm's shares. SEC Rule 10b-5, promulgated under Section 10(b) of the 1934 Act, prohibits fraud related to securities trading. The Insider Trading Sanctions Act of 1984 and the Insider Trading and Securities Fraud Enforcement Act of 1988 place penalties for illegal insider trading as high as three times the amount of profit gained or loss avoided from illegal trading. SEC regulation FD ("Fair Disclosure") requires that if a company intentionally discloses material non-public information to one person, it must simultaneously disclose that information to the public at large. In the case of unintentional disclosure of material non-public information to one person, the company must make a public disclosure "promptly".
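The treble-damages ceiling described above is simple arithmetic. The following sketch illustrates it; the function name is invented for illustration, and combining profit gained and loss avoided into one base figure is an assumption, not statutory language:

```python
def max_civil_penalty(profit_gained: float, loss_avoided: float) -> float:
    """Illustrative upper bound on the civil penalty for illegal
    insider trading: up to three times the profit gained or loss
    avoided (here the two amounts are summed into one base figure,
    a simplifying assumption)."""
    return 3 * (profit_gained + loss_avoided)

# A trader who gained $100,000 and avoided a further $50,000 loss
# could face a civil penalty of up to 3 * $150,000 = $450,000.
print(max_civil_penalty(100_000, 50_000))  # 450000
```

Criminal fines and imprisonment under the 1988 Act are separate from, and can be imposed in addition to, this civil penalty.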
Insider trading, or similar practices, are also regulated by the SEC under its rules on takeovers and tender offers under the Williams Act. Much of the development of insider trading law has resulted from court decisions. In 1909, the Supreme Court of the United States ruled in Strong v. Repide that a director who expects to act in a way that affects the value of shares cannot use that knowledge to acquire shares from those who do not know of the expected action. Even though, in general, ordinary relations between directors and shareholders in a business corporation are not of such a fiduciary nature as to make it the duty of a director to disclose to a shareholder general knowledge regarding the value of the shares of the company before he purchases any from a shareholder, some cases involve special facts that impose such duty. In 1968, the Second Circuit Court of Appeals advanced a "level playing field" theory of insider trading in SEC v. Texas Gulf Sulphur Co. The court stated that anyone in possession of inside information must either disclose the information or refrain from trading. Officers of the Texas Gulf Sulphur Company had used inside information about the discovery of the Kidd Mine to make profits by buying shares and call options on company stock. In 1984, the Supreme Court of the United States ruled in the case of Dirks v. Securities and Exchange Commission that tippees (receivers of second-hand information) are liable if they had reason to believe that the tipper had breached a fiduciary duty in disclosing confidential information. One such example would be if the tipper received any personal benefit from the disclosure, thereby breaching his or her duty of loyalty to the company. In Dirks, the "tippee" received confidential information from an insider, a former employee of a company. 
The reason the insider disclosed the information to the tippee, and the reason the tippee disclosed the information to third parties, was to blow the whistle on massive fraud at the company. As a result of the tippee's efforts the fraud was uncovered, and the company went into bankruptcy. But, while the tippee had given the "inside" information to clients who made profits from the information, the U.S. Supreme Court ruled that the tippee could not be held liable under the federal securities laws—for the simple reason that the insider from whom he received the information was not releasing the information for an improper purpose (a personal benefit), but rather for the purpose of exposing the fraud. The Supreme Court ruled that the tippee could not have been aiding and abetting a securities law violation committed by the insider—for the simple reason that no securities law violation had been committed by the insider. (In 2019, in the case of United States v. Blaszczak, the U.S. Court of Appeals for the Second Circuit ruled that the "personal-benefit" test announced in Dirks does not apply to Title 18 fraud statutes, such as 18 USC 1348.) In Dirks, the Supreme Court also defined the concept of "constructive insiders", who are lawyers, investment bankers, and others who receive confidential information from a corporation while providing services to the corporation. Constructive insiders are also liable for insider trading violations if the corporation expects the information to remain confidential, since they acquire the fiduciary duties of the true insider. The next expansion of insider trading liability came in SEC v. Materia, 745 F.2d 197 (2d Cir. 1984), the case that first introduced the misappropriation theory of liability for insider trading.
Materia, a financial printing firm proofreader, and clearly not an insider by any definition, was found to have determined the identity of takeover targets based on proofreading tender offer documents in the course of his employment. After a two-week trial, the district court found him liable for insider trading, and the Second Circuit Court of Appeals affirmed, holding that the theft of information from an employer, and the use of that information to purchase or sell securities in another entity, constituted a fraud in connection with the purchase or sale of a security. The misappropriation theory of insider trading was born, and liability further expanded to encompass a larger group of outsiders. In United States v. Carpenter (1986), the U.S. Supreme Court cited an earlier ruling while unanimously upholding mail and wire fraud convictions for a defendant who received his information from a journalist rather than from the company itself. The journalist R. Foster Winans was also convicted, on the grounds that he had misappropriated information belonging to his employer, The Wall Street Journal. In that widely publicized case, Winans traded in advance of "Heard on the Street" columns appearing in the Journal. The Court stated in Carpenter: "It is well established, as a general proposition, that a person who acquires special knowledge or information by virtue of a confidential or fiduciary relationship with another is not free to exploit that knowledge or information for his own personal benefit but must account to his principal for any profits derived therefrom." However, in upholding the securities fraud (insider trading) convictions, the justices were evenly split. In 1997, the U.S. Supreme Court adopted the misappropriation theory of insider trading in United States v. O'Hagan, 521 U.S. 642, 655 (1997). O'Hagan was a partner in a law firm representing Grand Metropolitan, while it was considering a tender offer for Pillsbury Company.
O'Hagan used this inside information by buying call options on Pillsbury stock, resulting in profits of over $4.3 million. O'Hagan claimed that neither he nor his firm owed a fiduciary duty to Pillsbury, so he did not commit fraud by purchasing Pillsbury options. The Court rejected O'Hagan's arguments and upheld his conviction. The "misappropriation theory" holds that a person commits fraud "in connection with" a securities transaction and thereby violates 10(b) and Rule 10b-5, when he misappropriates confidential information for securities trading purposes, in breach of a duty owed to the source of the information. Under this theory, a fiduciary's undisclosed, self-serving use of a principal's information to purchase or sell securities, in breach of a duty of loyalty and confidentiality, defrauds the principal of the exclusive use of the information. In lieu of premising liability on a fiduciary relationship between company insider and purchaser or seller of the company's stock, the misappropriation theory premises liability on a fiduciary-turned-trader's deception of those who entrusted him with access to confidential information. The Court specifically recognized that a corporation's information is its property: "A company's confidential information ... qualifies as property to which the company has a right of exclusive use. The undisclosed misappropriation of such information in violation of a fiduciary duty ... constitutes fraud akin to embezzlement – the fraudulent appropriation to one's own use of the money or goods entrusted to one's care by another." In 2000, the SEC enacted SEC Rule 10b5-1, which defined trading "on the basis of" inside information as any time a person trades while aware of material nonpublic information. It is no longer a defense for one to say that one would have made the trade anyway. The rule also created an affirmative defense for pre-planned trades. In Morgan Stanley v. Skowron, 989 F. Supp. 2d 356 (S.D.N.Y. 
2013), applying New York's faithless servant doctrine, the court held that a hedge fund's portfolio manager engaging in insider trading in violation of his company's code of conduct, which also required him to report his misconduct, must repay his employer the full $31 million his employer paid him as compensation during his period of faithlessness. The court called the insider trading the "ultimate abuse of a portfolio manager's position". The judge also wrote: "In addition to exposing Morgan Stanley to government investigations and direct financial losses, Skowron's behavior damaged the firm's reputation, a valuable corporate asset." In 2014, in the case of United States v. Newman, the United States Court of Appeals for the Second Circuit cited the Supreme Court's decision in Dirks, and ruled that for a "tippee" (a person who used information they received from an insider) to be guilty of insider trading, the tippee must have been aware not only that the information was insider information, but must also have been aware that the insider released the information for an improper purpose (such as a personal benefit). The Court concluded that the insider's breach of a fiduciary duty not to release confidential information—in the absence of an improper purpose on the part of the insider—is not enough to impose criminal liability on either the insider or the tippee. In 2016, in the case of Salman v. United States, the U.S. Supreme Court held that the benefit a tipper must receive as a predicate for an insider-trading prosecution of a tippee need not be pecuniary, and that giving a 'gift' of a tip to a family member is presumptively an act for the personal though intangible benefit of the tipper. Members of the US Congress are not exempt from the laws that ban insider trading. Because they generally do not have a confidential relationship with the source of the information they receive, however, they do not meet the usual definition of an "insider".
House of Representatives rules may however consider congressional insider trading unethical. A 2004 study found that stock sales and purchases by senators outperformed the market by 12.3% per year. Peter Schweizer points out several examples of insider trading by members of Congress, including action taken by Spencer Bachus following a private, behind-closed-doors meeting on the evening of September 18, 2008, when Hank Paulson and Ben Bernanke informed members of Congress about the issues due to the financial crisis of 2007–2008; Bachus then shorted stocks the next morning and cashed in his profits within a week. Also attending the same meeting were Senator Dick Durbin and House Speaker John Boehner; the same day (trade effective the next day), Durbin sold mutual-fund shares worth $42,696, and reinvested it all with Warren Buffett. Also the same day (trade effective the next day), Boehner cashed out of an equity mutual fund. In May 2007, a bill entitled the Stop Trading on Congressional Knowledge Act, or STOCK Act, was introduced that would hold congressional and federal employees liable for stock trades they made using information they gained through their jobs and also regulate analysts or political intelligence firms that research government activities. The STOCK Act was enacted on April 4, 2012. As of 2021, in the approximately nine-month period up to September 2021, Senate and House members disclosed 4,000 trades worth at least $315 million of stocks and bonds. Some economists and legal scholars (such as Henry Manne, Milton Friedman, Thomas Sowell, Daniel Fischel, and Frank H. Easterbrook) have argued that laws against insider trading should be repealed. They claim that insider trading based on material nonpublic information benefits investors, in general, by more quickly introducing new information into the market. Friedman, laureate of the Nobel Memorial Prize in Economics, said: "You want more insider trading, not less.
You want to give the people most likely to have knowledge about deficiencies of the company an incentive to make the public aware of that." Friedman did not believe that the trader should be required to make his trade known to the public, because the buying or selling pressure itself is information for the market. Other critics argue that insider trading is a victimless act: a willing buyer and a willing seller agree to trade property that the seller rightfully owns, with no prior contract (according to this view) having been made between the parties to refrain from trading if there is asymmetric information. The Atlantic has described the process as "arguably the closest thing that modern finance has to a victimless crime". Legalization advocates also question why "trading" where one party has more information than the other is legal in other markets, such as real estate, but not in the stock market. For example, if a geologist knows there is a high likelihood of the discovery of petroleum under Farmer Smith's land, he may be entitled to make Smith an offer for the land, and buy it, without first telling Farmer Smith of the geological data. Advocates of legalization make free speech arguments. Punishment for communicating about a development pertinent to the next day's stock price might seem an act of censorship. If the information being conveyed is proprietary information and the corporate insider has contracted to not expose it, he has no more right to communicate it than he would to tell others about the company's confidential new product designs, formulas, or bank account passwords. Some authors have used these arguments to propose legalizing insider trading on negative information (but not on positive information). Since negative information is often withheld from the market, trading on such information has a higher value for the market than trading on positive information. 
There are very limited laws against "insider trading" in the commodities markets, if for no other reason than that the concept of an "insider" is not immediately analogous to commodities themselves (corn, wheat, steel, etc.). However, analogous activities such as front running are illegal under US commodity and futures trading laws. For example, a commodity broker can be charged with fraud for receiving a large purchase order from a client (one likely to affect the price of that commodity) and then purchasing that commodity before executing the client's order to benefit from the anticipated price increase. The advent of the Internet has provided a forum for the commercialisation of trading on insider information. In 2016 a number of dark web sites were identified as marketplaces where such non-public information was bought and sold. At least one such site used bitcoins to avoid currency restrictions and to impede tracking. Such sites also provide a place for soliciting corporate informants, where non-public information may be used for purposes other than stock trading. The US and the UK vary in the way the law is interpreted and applied with regard to insider trading. In the UK, the relevant laws are the Criminal Justice Act 1993, Part V, Schedule 1; the Financial Services and Markets Act 2000, which defines an offence of "market abuse"; and the European Union Regulation No 596/2014. The principle is that it is illegal to trade on the basis of market-sensitive information that is not generally known. This is a much broader scope than under U.S. law. The key differences from U.S. law are that no relationship to either the issuer of the security or the tipster is required; all that is required is that the guilty party traded (or caused trading) whilst having inside information, and there is no scienter requirement under UK law. Japan enacted its first law against insider trading in 1988.
Roderick Seeman said, "Even today many Japanese do not understand why this is illegal. Indeed, previously it was regarded as common sense to make a profit from your knowledge." In Malta the law follows the European broader scope model. The relevant statute is the Prevention of Financial Markets Abuse Act of 2005, as amended. Earlier acts included the Financial Markets Abuse Act in 2002, and the Insider Dealing and Market Abuse Act of 1994. The International Organization of Securities Commissions (IOSCO) paper on the "Objectives and Principles of Securities Regulation" (updated to 2003) states that the three objectives of good securities market regulation are investor protection, ensuring that markets are fair, efficient and transparent, and reducing systemic risk. The discussion of these "Core Principles" states that "investor protection" in this context means "Investors should be protected from misleading, manipulative or fraudulent practices, including insider trading, front running or trading ahead of customers and the misuse of client assets." More than 85 percent of the world's securities and commodities market regulators are members of IOSCO and have signed on to these Core Principles. The World Bank and International Monetary Fund now use the IOSCO Core Principles in reviewing the financial health of different countries' regulatory systems as part of these organizations' financial sector assessment programs, so laws against insider trading based on non-public information are now expected by the international community. Enforcement of insider trading laws varies widely from country to country, but the vast majority of jurisdictions now outlaw the practice, at least in principle. Larry Harris claims that differences in the effectiveness with which countries restrict insider trading help to explain the differences in executive compensation among those countries.
The US, for example, has much higher CEO salaries than Japan or Germany, where insider trading is less effectively restrained. In 2014, the European Union (EU) adopted legislation (Criminal Sanctions for Market Abuse Directive) that harmonised criminal sanctions for insider dealing. All EU Member States agreed to introduce maximum prison sentences of at least four years for serious cases of market manipulation and insider dealing, and at least two years for improper disclosure of insider information. The current Australian legislation arose out of a 1989 parliamentary committee report which recommended removal of the requirement that the trader be 'connected' with the body corporate. This may have weakened the importance of the fiduciary duty rationale and possibly brought new potential offenders within its ambit. In Australia, if a person possesses inside information and knows, or ought reasonably to know, that the information is not generally available and is materially price sensitive, then the insider must not trade. Nor must he or she procure another to trade or tip another. Information will be considered generally available if it consists of readily observable matter or it has been made known to common investors and a reasonable period for it to be disseminated among such investors has elapsed. In 2009, a journalist in Nettavisen (Thomas Gulbrandsen) was sentenced to 4 months in prison for insider trading. The longest prison sentence in a Norwegian trial where the main charge was insider trading was for eight years (two suspended), handed down when Alain Angelil was convicted in a district court on December 9, 2011. Although insider trading in the UK has been illegal since 1980, it proved difficult to successfully prosecute individuals accused of insider trading. There were a number of notorious cases where individuals were able to escape prosecution. Instead the UK regulators relied on a series of fines to punish market abuses.
These fines were widely perceived as an ineffective deterrent, and there was a statement of intent by the UK regulator (the Financial Services Authority) to use its powers to enforce the legislation (specifically the Financial Services and Markets Act 2000). Between 2009 and 2012 the FSA secured 14 convictions in relation to insider dealing.

Anil Kumar, a senior partner at management consulting firm McKinsey & Company, pleaded guilty in 2010 to insider trading in a "descent from the pinnacle of the business world".

Chip Skowron, a hedge fund co-portfolio manager of FrontPoint Partners LLC's health care funds, was convicted of insider trading in 2011, for which he served five years in prison. He had been tipped off by a consultant to a company that the company was about to make a negative announcement regarding its clinical trial for a drug. At first Skowron denied the charges against him, and his defense attorney said he would plead not guilty, saying "We look forward to responding to the allegations more fully in court at the appropriate time". However, after the consultant charged with tipping him off pleaded guilty, he changed his position and admitted his guilt.

Rajat Gupta, who had been managing partner of McKinsey & Co. and a director at Goldman Sachs Group Inc. and Procter & Gamble Co., was convicted by a federal jury in 2012 and sentenced to two years in prison for leaking inside information to hedge fund manager Raj Rajaratnam, who was sentenced to 11 years in prison. The case was prosecuted by the office of United States Attorney for the Southern District of New York Preet Bharara.

Mathew Martoma, former hedge fund trader and portfolio manager at S.A.C. Capital Advisors, was accused of generating possibly the largest single insider trading transaction profit in history, at a value of $276 million. He was convicted in February 2014, and is serving a nine-year prison sentence.
With the guilty plea by Perkins Hixon in 2014 for insider trading from 2010 to 2013 while at Evercore Partners, Bharara said in a press release that 250 defendants whom his office had charged since August 2009 had now been convicted.

On December 10, 2014, a federal appeals court overturned the insider trading convictions of two former hedge fund traders, Todd Newman and Anthony Chiasson, based on the "erroneous" instructions given to jurors by the trial judge. The decision was expected to affect the appeal of the separate insider-trading conviction of former SAC Capital portfolio manager Michael Steinberg, and in 2015 the U.S. Attorney and the SEC did drop their cases against Steinberg and others.

In 2016, Sean Stewart, a former managing director at Perella Weinberg Partners LP and vice president at JPMorgan Chase, was convicted on allegations he tipped his father on pending health-care deals. The father, Robert Stewart, previously had pleaded guilty but didn't testify during his son's trial. It was argued that by way of compensation for the tip, the father had paid more than $10,000 for Sean's wedding photographer.

In 2017, Billy Walters, Las Vegas sports bettor, was convicted of making $40 million on private information about Dallas-based dairy processing company Dean Foods, and sentenced to five years in prison. Walters's source, company director Thomas C. Davis, employed a prepaid cell phone and sometimes the code words "Dallas Cowboys" for Dean Foods to help Walters, from 2008 to 2014, realize profits and avoid losses in the stock, the federal jury found. Golfer Phil Mickelson "was also mentioned during the trial as someone who had traded in Dean Foods shares and once owed nearly $2 million in gambling debts to" Walters. Mickelson "made roughly $1 million trading Dean Foods shares; he agreed to forfeit those profits in a related civil case brought by the Securities and Exchange Commission".
Walters appealed the verdict, but in December 2018 his conviction was upheld by the 2nd U.S. Circuit Court of Appeals in Manhattan.

In 2018, David Blaszczak, the "king of political intelligence," Theodore Huber and Robert Olan, two partners at hedge fund Deerfield Management, and Christopher Worrall, an employee at the Centers for Medicare and Medicaid Services (CMS), were convicted of insider trading by the U.S. Attorney's Office in the Southern District of New York. Worrall leaked confidential government information that he stole from CMS to Blaszczak, and Blaszczak passed that information to Huber and Olan, who made $7 million trading securities. The convictions were upheld in 2019 by the Second Circuit, U.S. Court of Appeals in Manhattan; that opinion was vacated by the Supreme Court in 2021, and the Second Circuit is now reconsidering its decision.

In 2021, Puneet Dikshit, a partner at McKinsey, pled guilty to trading on inside information that he had access to while advising Goldman Sachs on its acquisition of GreenSky, Inc. Dikshit was the third McKinsey partner to be convicted of insider trading in the Southern District of New York.

In 2023, Terren Peizer was charged with insider trading by the SEC, which alleged that he sold $20 million of Ontrak Inc. stock while he was in possession of material nonpublic negative information. Peizer was the CEO and chairman of Ontrak. In addition, the U.S. Department of Justice announced criminal charges of securities fraud against Peizer, charging that he had thereby avoided $12 million in losses; he was arrested. The case is assigned to the U.S. District Court for the Central District of California before U.S. District Judge Dale S. Fischer. If convicted, Peizer could face up to 65 years in prison.
In 2008, police uncovered an insider trading conspiracy involving Bay Street and Wall Street lawyer Gil Cornblum, who had worked at Sullivan & Cromwell and was working at Dorsey & Whitney, and a former lawyer, Stan Grmovsek, who were found to have gained over $10 million in illegal profits over a 14-year span. Cornblum committed suicide by jumping from a bridge while under investigation, shortly before he was to be arrested but before criminal charges were laid against him, and one day before his alleged co-conspirator Grmovsek pled guilty. Grmovsek pleaded guilty to insider trading and was sentenced to 39 months in prison. This was the longest term ever imposed for insider trading in Canada. These crimes were explored in Mark Coakley's 2011 non-fiction book, Tip and Trade.

The U.S. SEC alleged that in 2009 Kuwaiti trader Hazem Al-Braikan engaged in insider trading after misleading the public about possible takeover bids for two companies. Three days after Al-Braikan was sued by the SEC, he was found dead of a gunshot wound to the head in his home in Kuwait City on July 26, 2009, in what Kuwaiti police called a suicide. The SEC later reached a $6.5 million settlement of civil insider trading charges with his estate and others.

The majority of shares in China before 2005 were non-tradeable shares that were not sold on the stock exchange publicly but privately. To make shares more accessible, the China Securities Regulation Commission (CSRC) required the companies to convert the non-tradeable shares into tradeable shares. There was a deadline for companies to convert their shares, and the deadline was short; because of this there was a massive volume of exchanges, and in the midst of these exchanges many people committed insider trading, knowing that the selling of these shares would affect prices. Chinese people did not fear insider trading as much as one might in the United States, because there is no possibility of imprisonment.
Punishment may include monetary fines or temporary removal from a position in the company. The Chinese generally do not view insider trading as a crime worth prison time, because the person convicted typically has a clean record and a history of success, which weighs against treating them as a criminal. On October 1, 2015, Chinese fund manager Xu Xiang was arrested for insider trading.

Insider trading in India is an offense according to Sections 12A and 15G of the Securities and Exchange Board of India Act, 1992. Insider trading is when one with access to non-public, price-sensitive information about the securities of the company subscribes, buys, sells, or deals, or agrees to do so, or counsels another to do so, as principal or agent. Price-sensitive information is information that materially affects the value of the securities. The penalty for insider trading is imprisonment, which may extend to five years, and a fine of a minimum of five lakh rupees (500,000) to 25 crore rupees (250 million) or three times the profit made, whichever is higher.

The Wall Street Journal, in a 2014 article entitled "Why It's Hard to Catch India's Insider Trading", said that despite a widespread belief that insider trading takes place on a regular basis in India, there were few examples of insider traders being prosecuted there. One former top regulator said that in India insider trading is deeply rooted and especially rampant because regulators do not have the tools to address it. In the few cases where prosecution has taken place, cases have sometimes taken more than a decade to reach trial, and punishments have been light; and despite SEBI by law having the ability to demand penalties of up to $4 million, the few fines that were levied for insider trading have usually been under $200,000.

Under Republic Act 8799, or the Securities Regulation Code, insider trading in the Philippines is illegal.
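The Indian fine ceiling described above ("25 crore rupees ... or three times the profit made, whichever is higher") is simple arithmetic, sketched below for illustration only. The function name and the simplified reading of the statute are assumptions made for the example, not a statement of the law:

```python
# Illustrative sketch (not legal advice) of the fine ceiling under the SEBI
# Act as described above: the ceiling is 25 crore rupees, or three times the
# profit made, whichever is higher. The function name is hypothetical.

STATUTORY_CEILING = 250_000_000  # 25 crore rupees

def maximum_fine(profit: int) -> int:
    """Return the higher of the statutory ceiling and three times the profit."""
    return max(STATUTORY_CEILING, 3 * profit)

# A profit of 100 crore rupees raises the ceiling to 300 crore:
print(maximum_fine(1_000_000_000))  # 3000000000
# A small profit leaves the 25 crore statutory figure in place:
print(maximum_fine(1_000_000))      # 250000000
```

The same "treble the gain" pattern also appears in the Brazilian and US provisions discussed in this article, with different base figures.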
The practice of insider trading is an illegal act under Brazilian law, since it constitutes unfair behavior that threatens the security and equality of legal conditions in the market. Since 2001, the practice has also been considered a crime. Law 6,385/1976, as amended by Law 10,303/2001, provided for Article 27-D, which typifies the conduct of "Using relevant information not yet disclosed to the market, of which he is aware and from which he must maintain secrecy, capable of providing, for himself or for others, undue advantage, through trading, on his own behalf or on behalf of a third party, with securities: Penalty - imprisonment, from 1 (one) to 5 (five) years, and a fine of up to 3 (three) times the amount of the illicit advantage obtained as a result of the crime."

The first conviction handed down in Brazil for the offense of "misuse of privileged information" occurred in 2011, by federal judge Marcelo Costenaro Cavali, of the Sixth Criminal Court of São Paulo, in the case of the Sadia-Perdigão merger. The former Director of Finance and Investor Relations, Luiz Gonzaga Murat Júnior, was sentenced to one year and nine months in prison in an open regime, replaceable by community service, and to the inability to exercise the position of administrator or fiscal councilor of a publicly traded company for the time he serves his sentence, in addition to a fine of R$349,711.53. The then member of the board of directors Romano Ancelmo Fontana Filho was sentenced to prison for one year and five months in an open regime, also replaceable by community service, in addition to not being able to exercise the position of administrator or fiscal councilor of a publicly held company. He was also fined R$374,940.52.
Insider trading is the trading of a public company's stock or other securities (such as bonds or stock options) based on material, nonpublic information about the company. In various countries, some kinds of trading based on insider information are illegal. This is because it is seen as unfair to other investors who do not have access to the information, as the investor with insider information could potentially make larger profits than a typical investor could make. The rules governing insider trading are complex and vary significantly from country to country. The extent of enforcement also varies from one country to another. The definition of insider in one jurisdiction can be broad and may cover not only insiders themselves but also any persons related to them, such as brokers, associates, and even family members. A person who becomes aware of non-public information and trades on that basis may be guilty of a crime.

Trading by specific insiders, such as employees, is commonly permitted as long as it does not rely on material information not in the public domain. Many jurisdictions require that such trading be reported so that the transactions can be monitored. In the United States and several other jurisdictions, trading conducted by corporate officers, key employees, directors, or significant shareholders must be reported to the regulator or publicly disclosed, usually within a few business days of the trade. In these cases, insiders in the United States are required to file Form 4 with the U.S. Securities and Exchange Commission (SEC) when buying or selling shares of their own companies. The authors of one study claim that illegal insider trading raises the cost of capital for securities issuers, thus decreasing overall economic growth.
Some economists, such as Henry Manne, argued that insider trading should be allowed and could, in fact, benefit markets.

There has long been "considerable academic debate" among business and legal scholars over whether or not insider trading should be illegal. Several arguments against outlawing insider trading have been identified: for example, although insider trading is illegal, most insider trading is never detected by law enforcement, and thus the illegality of insider trading might give the public the potentially misleading impression that "stock market trading is an unrigged game that anyone can play." Some legal analysis has questioned whether insider trading actually harms anyone in the legal sense, since some have questioned whether insider trading causes anyone to suffer an actual "loss" and whether anyone who suffers a loss is owed an actual legal duty by the insiders in question.

Illegal

Rules prohibiting or criminalizing insider trading on material non-public information exist in most jurisdictions around the world (Bhattacharya and Daouk, 2002), but the details and the efforts to enforce them vary considerably. In the United States, Sections 16(b) and 10(b) of the Securities Exchange Act of 1934 directly and indirectly address insider trading. The U.S. Congress enacted this law after the stock market crash of 1929. While the United States is generally viewed as making the most serious efforts to enforce its insider trading laws, the broader scope of the European model legislation provides a stricter framework against illegal insider trading. In the European Union and the United Kingdom, all trading on non-public information is, under the rubric of market abuse, subject at a minimum to civil penalties and possible criminal penalties as well.
The UK's Financial Conduct Authority has the responsibility to investigate and prosecute insider dealing, as defined by the Criminal Justice Act 1993.

In the United States, Canada, Australia, Germany and Romania, for mandatory reporting purposes, corporate insiders are defined as a company's officers, directors and any beneficial owners of more than 10% of a class of the company's equity securities. Trades made by these types of insiders in the company's own stock, based on material non-public information, are considered fraudulent since the insiders are violating the fiduciary duty that they owe to the shareholders. The corporate insider, simply by accepting employment, has undertaken a legal obligation to the shareholders to put the shareholders' interests before their own in matters related to the corporation. When insiders buy or sell based on company-owned information, they are said to be violating their obligation to the shareholders.

For example, illegal insider trading would occur if the chief executive officer of Company A learned (prior to a public announcement) that Company A will be taken over and then bought shares in Company A while knowing that the share price would likely rise. In the United States and many other jurisdictions, "insiders" are not just limited to corporate officials and major shareholders where illegal insider trading is concerned but can include any individual who trades shares based on material non-public information in violation of some duty of trust.
This duty may be imputed; for example, in many jurisdictions, in cases where a corporate insider "tips" a friend about non-public information likely to have an effect on the company's share price, the duty the corporate insider owes the company is now imputed to the friend, and the friend violates a duty to the company if he trades on the basis of this information.

Liability for insider trading violations generally cannot be avoided by passing on the information in an "I scratch your back; you scratch mine" or quid pro quo arrangement if the person receiving the information knew or should have known that the information was material non-public information. In the United States, at least one court has indicated that the insider who releases the non-public information must have done so for an improper purpose. In the case of a person who receives the insider information (called the "tippee"), the tippee must also have been aware that the insider released the information for an improper purpose.

One commentator has argued that if Company A's CEO did not trade on undisclosed takeover news, but instead passed the information on to his brother-in-law who traded on it, illegal insider trading would still have occurred (albeit by proxy, by passing it on to a "non-insider" so Company A's CEO would not get his hands dirty).

A newer view of insider trading, the misappropriation theory, is now accepted in U.S. law. It states that anyone who misappropriates material non-public information and trades on that information in any stock may be guilty of insider trading. This can include eliciting material non-public information from an insider with the intention of trading on it, or passing it on to someone who will.
This theory forms the background for the securities regulation that enforces insider trading prohibitions. Disgorgement represents ill-gotten gains (or losses avoided) resulting from individuals violating the securities laws. In general, in countries where insider trading is forbidden, the competent authority seeks disgorgement to ensure that securities law violators do not profit from their illegal activity. When appropriate, the disgorged funds are returned to the injured investors. Disgorgements can be ordered in either administrative proceedings or civil actions, and the cases can be settled or litigated. Payment of disgorgement can be either completely or partially waived based on the defendant demonstrating an inability to pay. In settled administrative proceedings, Enforcement may recommend, if appropriate, that the disgorgement be waived. There are several approaches to quantifying the disgorgement; an innovative procedure based on probability theory was defined by Marcello Minenna by directly analyzing the time periods of the transactions involved in the insider trading.

Proving that someone has been responsible for a trade can be difficult because traders may try to hide behind nominees, offshore companies, and other proxies. The Securities and Exchange Commission (SEC) prosecutes over 50 cases each year, with many being settled administratively out of court. The SEC and several stock exchanges actively monitor trading, looking for suspicious activity. The SEC does not have criminal enforcement authority but can refer serious matters to the U.S. Attorney's Office for further investigation and prosecution.

In the United States and most non-European jurisdictions, not all trading on non-public information is illegal insider trading.
For example, a person in a restaurant who hears the CEO of Company A at the next table tell the CFO that the company's profits will be higher than expected and then buys the stock is not guilty of insider trading—unless he or she had some closer connection to the company or company officers. However, even where the tippee is not himself an insider, where the tippee knows that the information is non-public and the information is paid for, or the tipper receives a benefit for giving it, then in the broader-scope jurisdictions the subsequent trading is illegal.

Notwithstanding, information about a tender offer (usually regarding a merger or acquisition) is held to a higher standard. If this type of information is obtained (directly or indirectly) and there is reason to believe it is nonpublic, there is a duty to disclose it or abstain from trading.

In the United States, in addition to civil penalties, the trader may also be subject to criminal prosecution for fraud, or, where SEC regulations have been broken, the U.S. Department of Justice (DOJ) may be called to conduct an independent parallel investigation. If the DOJ finds criminal wrongdoing, the department may file criminal charges.

Legal

Legal trades by insiders are common, as employees of publicly traded corporations often have stock or stock options. These trades are made public in the United States through Securities and Exchange Commission filings, mainly Form 4.

U.S.
SEC Rule 10b5-1 clarified that the prohibition against insider trading does not require proof that an insider actually used material nonpublic information when conducting a trade; possession of such information alone is sufficient to violate the provision, and the SEC would infer that an insider in possession of material nonpublic information used this information when conducting a trade. However, SEC Rule 10b5-1 also created for insiders an affirmative defense if the insider can demonstrate that the trades conducted on behalf of the insider were conducted as part of a pre-existing contract or written binding plan for trading in the future.

For example, if an insider expects to retire after a specific period of time and, as part of retirement planning, the insider has adopted a written binding plan to sell a specific amount of the company's stock every month for two years, and the insider later comes into possession of material nonpublic information about the company, trades based on the original plan might not constitute prohibited insider trading.

United States law

Until the 21st century and the European Union's market abuse laws, the United States was the leading country in prohibiting insider trading made on the basis of material non-public information. Thomas Newkirk and Melissa Robertson of the SEC summarize the development of US insider trading laws. Insider trading has a base offense level of 8, which puts it in Zone A under the U.S. Sentencing Guidelines. This means that first-time offenders are eligible to receive probation rather than incarceration.

U.S. insider trading prohibitions are based on English and American common law prohibitions against fraud.
In 1909, well before the Securities Exchange Act was passed, the United States Supreme Court ruled that a corporate director who bought that company's stock when he knew the stock's price was about to increase committed fraud by buying but not disclosing his inside information.

Section 15 of the Securities Act of 1933 contained prohibitions of fraud in the sale of securities, later greatly strengthened by the Securities Exchange Act of 1934.

Section 16(b) of the Securities Exchange Act of 1934 prohibits short-swing profits (from any purchases and sales within any six-month period) made by corporate directors, officers, or stockholders owning more than 10% of a firm's shares. Under Section 10(b) of the 1934 Act, SEC Rule 10b-5 prohibits fraud related to securities trading.

The Insider Trading Sanctions Act of 1984 and the Insider Trading and Securities Fraud Enforcement Act of 1988 place penalties for illegal insider trading as high as three times the amount of profit gained or loss avoided from illegal trading.

SEC regulation FD ("Fair Disclosure") requires that if a company intentionally discloses material non-public information to one person, it must simultaneously disclose that information to the public at large.
In the case of unintentional disclosure of material non-public information to one person, the company must make a public disclosure "promptly".

Insider trading, or similar practices, are also regulated by the SEC under its rules on takeovers and tender offers under the Williams Act.

Much of the development of insider trading law has resulted from court decisions.

In 1909, the Supreme Court of the United States ruled in Strong v. Repide that a director who expects to act in a way that affects the value of shares cannot use that knowledge to acquire shares from those who do not know of the expected action. Even though, in general, ordinary relations between directors and shareholders in a business corporation are not of such a fiduciary nature as to make it the duty of a director to disclose to a shareholder general knowledge regarding the value of the shares of the company before he purchases any from a shareholder, some cases involve special facts that impose such duty.

In 1968, the Second Circuit Court of Appeals advanced a "level playing field" theory of insider trading in SEC v. Texas Gulf Sulphur Co. The court stated that anyone in possession of inside information must either disclose the information or refrain from trading. Officers of the Texas Gulf Sulphur Company had used inside information about the discovery of the Kidd Mine to make profits by buying shares and call options on company stock.

In 1984, the Supreme Court of the United States ruled in the case of Dirks v.
Securities and Exchange Commission that tippees (receivers of second-hand information) are liable if they had reason to believe that the tipper had breached a fiduciary duty in disclosing confidential information. One such example would be if the tipper received any personal benefit from the disclosure, thereby breaching his or her duty of loyalty to the company. In Dirks, the "tippee" received confidential information from an insider, a former employee of a company. The reason the insider disclosed the information to the tippee, and the reason the tippee disclosed the information to third parties, was to blow the whistle on massive fraud at the company. As a result of the tippee's efforts the fraud was uncovered, and the company went into bankruptcy. But, while the tippee had given the "inside" information to clients who made profits from the information, the U.S. Supreme Court ruled that the tippee could not be held liable under the federal securities laws—for the simple reason that the insider from whom he received the information was not releasing the information for an improper purpose (a personal benefit), but rather for the purpose of exposing the fraud. The Supreme Court ruled that the tippee could not have been aiding and abetting a securities law violation committed by the insider—for the simple reason that no securities law violation had been committed by the insider.

(In 2019, in the case of United States v. Blaszczak, the U.S. Court of Appeals for the Second Circuit ruled that the "personal-benefit" test announced in Dirks does not apply to Title 18 fraud statutes, such as 18 USC 1348.)

In Dirks, the Supreme Court also defined the concept of "constructive insiders", who are lawyers, investment bankers, and others who receive confidential information from a corporation while providing services to the corporation.
Constructive insiders are also liable for insider trading violations if the corporation expects the information to remain confidential, since they acquire the fiduciary duties of the true insider.

The next expansion of insider trading liability came in SEC vs. Materia, 745 F.2d 197 (2d Cir. 1984), the case that first introduced the misappropriation theory of liability for insider trading. Materia, a financial printing firm proofreader, and clearly not an insider by any definition, was found to have determined the identity of takeover targets based on proofreading tender offer documents in the course of his employment. After a two-week trial, the district court found him liable for insider trading, and the Second Circuit Court of Appeals affirmed, holding that the theft of information from an employer, and the use of that information to purchase or sell securities in another entity, constituted a fraud in connection with the purchase or sale of a security. The misappropriation theory of insider trading was born, and liability was further expanded to encompass a larger group of outsiders.

In United States v. Carpenter (1986) the U.S. Supreme Court cited an earlier ruling while unanimously upholding mail and wire fraud convictions for a defendant who received his information from a journalist rather than from the company itself. The journalist R. Foster Winans was also convicted, on the grounds that he had misappropriated information belonging to his employer, The Wall Street Journal.
In that widely publicized case, Winans traded in advance of "Heard on the Street" columns appearing in the Journal.

The Court stated in Carpenter: "It is well established, as a general proposition, that a person who acquires special knowledge or information by virtue of a confidential or fiduciary relationship with another is not free to exploit that knowledge or information for his own personal benefit but must account to his principal for any profits derived therefrom."

However, in upholding the securities fraud (insider trading) convictions, the justices were evenly split.

In 1997, the U.S. Supreme Court adopted the misappropriation theory of insider trading in United States v. O'Hagan, 521 U.S. 642, 655 (1997). O'Hagan was a partner in a law firm representing Grand Metropolitan, while it was considering a tender offer for Pillsbury Company. O'Hagan used this inside information by buying call options on Pillsbury stock, resulting in profits of over $4.3 million. O'Hagan claimed that neither he nor his firm owed a fiduciary duty to Pillsbury, so he did not commit fraud by purchasing Pillsbury options.

The Court rejected O'Hagan's arguments and upheld his conviction.

The "misappropriation theory" holds that a person commits fraud "in connection with" a securities transaction, and thereby violates 10(b) and Rule 10b-5, when he misappropriates confidential information for securities trading purposes, in breach of a duty owed to the source of the information.
Under this theory, a fiduciary's undisclosed, self-serving use of a principal's information to purchase or sell securities, in breach of a duty of loyalty and confidentiality, defrauds the principal of the exclusive use of the information. In lieu of premising liability on a fiduciary relationship between company insider and purchaser or seller of the company's stock, the misappropriation theory premises liability on a fiduciary-turned-trader's deception of those who entrusted him with access to confidential information.", "title": "United States law" }, { "paragraph_id": 36, "text": "The Court specifically recognized that a corporation's information is its property: \"A company's confidential information ... qualifies as property to which the company has a right of exclusive use. The undisclosed misappropriation of such information in violation of a fiduciary duty ... constitutes fraud akin to embezzlement – the fraudulent appropriation to one's own use of the money or goods entrusted to one's care by another.\"", "title": "United States law" }, { "paragraph_id": 37, "text": "In 2000, the SEC enacted SEC Rule 10b5-1, which defined trading \"on the basis of\" inside information as any time a person trades while aware of material nonpublic information. It is no longer a defense for one to say that one would have made the trade anyway. The rule also created an affirmative defense for pre-planned trades.", "title": "United States law" }, { "paragraph_id": 38, "text": "In Morgan Stanley v. Skowron, 989 F. Supp. 2d 356 (S.D.N.Y. 2013), applying New York's faithless servant doctrine, the court held that a hedge fund's portfolio manager engaging in insider trading in violation of his company's code of conduct, which also required him to report his misconduct, must repay his employer the full $31 million his employer paid him as compensation during his period of faithlessness. The court called the insider trading the \"ultimate abuse of a portfolio manager's position\". 
The judge also wrote: \"In addition to exposing Morgan Stanley to government investigations and direct financial losses, Skowron's behavior damaged the firm's reputation, a valuable corporate asset.\"", "title": "United States law" }, { "paragraph_id": 39, "text": "In 2014, in the case of United States v. Newman, the United States Court of Appeals for the Second Circuit cited the Supreme Court's decision in Dirks, and ruled that for a \"tippee\" (a person who used information they received from an insider) to be guilty of insider trading, the tippee must have been aware not only that the information was insider information, but must also have been aware that the insider released the information for an improper purpose (such as a personal benefit). The Court concluded that the insider's breach of a fiduciary duty not to release confidential information—in the absence of an improper purpose on the part of the insider—is not enough to impose criminal liability on either the insider or the tippee.", "title": "United States law" }, { "paragraph_id": 40, "text": "In 2016, in the case of Salman v. United States, the U.S. Supreme Court held that the benefit a tipper must receive as predicate for an insider-trader prosecution of a tippee need not be pecuniary, and that giving a 'gift' of a tip to a family member is presumptively an act for the personal though intangible benefit of the tipper.", "title": "United States law" }, { "paragraph_id": 41, "text": "Members of the US Congress are not exempt from the laws that ban insider trading. Because they generally do not have a confidential relationship with the source of the information they receive, however, they do not meet the usual definition of an \"insider\". House of Representatives rules may however consider congressional insider trading unethical. A 2004 study found that stock sales and purchases by senators outperformed the market by 12.3% per year. 
Peter Schweizer points out several examples of insider trading by members of Congress, including action taken by Spencer Bachus following a private, closed-door meeting on the evening of September 18, 2008, when Hank Paulson and Ben Bernanke informed members of Congress about the issues arising from the financial crisis of 2007–2008. Bachus then shorted stocks the next morning and cashed in his profits within a week. Also attending the same meeting were Senator Dick Durbin and House Speaker John Boehner; the same day (trade effective the next day), Durbin sold mutual-fund shares worth $42,696, and reinvested it all with Warren Buffett. Also the same day (trade effective the next day), Boehner cashed out of an equity mutual fund.", "title": "United States law" }, { "paragraph_id": 42, "text": "In May 2007, a bill entitled the Stop Trading on Congressional Knowledge Act, or STOCK Act, was introduced that would hold congressional and federal employees liable for stock trades they made using information they gained through their jobs and also regulate analysts or political intelligence firms that research government activities. The STOCK Act was enacted on April 4, 2012. In the approximately nine-month period up to September 2021, Senate and House members disclosed 4,000 trades worth at least $315 million in stocks and bonds.", "title": "United States law" }, { "paragraph_id": 43, "text": "Some economists and legal scholars (such as Henry Manne, Milton Friedman, Thomas Sowell, Daniel Fischel, and Frank H. Easterbrook) have argued that laws against insider trading should be repealed. They claim that insider trading based on material nonpublic information benefits investors, in general, by more quickly introducing new information into the market.", "title": "Arguments for legalizing" }, { "paragraph_id": 44, "text": "Friedman, laureate of the Nobel Memorial Prize in Economics, said: \"You want more insider trading, not less.
You want to give the people most likely to have knowledge about deficiencies of the company an incentive to make the public aware of that.\" Friedman did not believe that the trader should be required to make his trade known to the public, because the buying or selling pressure itself is information for the market.", "title": "Arguments for legalizing" }, { "paragraph_id": 45, "text": "Other critics argue that insider trading is a victimless act: a willing buyer and a willing seller agree to trade property that the seller rightfully owns, with no prior contract (according to this view) having been made between the parties to refrain from trading if there is asymmetric information. The Atlantic has described the process as \"arguably the closest thing that modern finance has to a victimless crime\".", "title": "Arguments for legalizing" }, { "paragraph_id": 46, "text": "Legalization advocates also question why \"trading\" where one party has more information than the other is legal in other markets, such as real estate, but not in the stock market. For example, if a geologist knows there is a high likelihood of the discovery of petroleum under Farmer Smith's land, he may be entitled to make Smith an offer for the land, and buy it, without first telling Farmer Smith of the geological data.", "title": "Arguments for legalizing" }, { "paragraph_id": 47, "text": "Advocates of legalization make free speech arguments. Punishment for communicating about a development pertinent to the next day's stock price might seem an act of censorship. 
If the information being conveyed is proprietary information and the corporate insider has contracted not to expose it, he has no more right to communicate it than he would to tell others about the company's confidential new product designs, formulas, or bank account passwords.", "title": "Arguments for legalizing" }, { "paragraph_id": 48, "text": "Some authors have used these arguments to propose legalizing insider trading on negative information (but not on positive information). Since negative information is often withheld from the market, trading on such information has a higher value for the market than trading on positive information.", "title": "Arguments for legalizing" }, { "paragraph_id": 49, "text": "There are very limited laws against \"insider trading\" in the commodities markets, if for no other reason than that the concept of an \"insider\" is not immediately analogous to commodities themselves (corn, wheat, steel, etc.). However, analogous activities such as front running are illegal under US commodity and futures trading laws. For example, a commodity broker can be charged with fraud for receiving a large purchase order from a client (one likely to affect the price of that commodity) and then purchasing that commodity before executing the client's order to benefit from the anticipated price increase.", "title": "Arguments for legalizing" }, { "paragraph_id": 50, "text": "The advent of the Internet has provided a forum for the commercialisation of trading on insider information. In 2016 a number of dark web sites were identified as marketplaces where such non-public information was bought and sold. At least one such site used bitcoins to avoid currency restrictions and to impede tracking.
Such sites also provide a place for soliciting corporate informants, where non-public information may be used for purposes other than stock trading.", "title": "Commercialisation" }, { "paragraph_id": 51, "text": "The US and the UK vary in the way the law is interpreted and applied with regard to insider trading. In the UK, the relevant laws are the Criminal Justice Act 1993, Part V, Schedule 1; the Financial Services and Markets Act 2000, which defines an offence of \"market abuse\"; and the European Union Regulation No 596/2014. The principle is that it is illegal to trade on the basis of market-sensitive information that is not generally known. This is a much broader scope than under U.S. law. The key differences from U.S. law are that no relationship to either the issuer of the security or the tipster is required; all that is required is that the guilty party traded (or caused trading) whilst having inside information, and there is no scienter requirement under UK law.", "title": "Legal differences among jurisdictions" }, { "paragraph_id": 52, "text": "Japan enacted its first law against insider trading in 1988. Roderick Seeman said, \"Even today many Japanese do not understand why this is illegal. Indeed, previously it was regarded as common sense to make a profit from your knowledge.\"", "title": "Legal differences among jurisdictions" }, { "paragraph_id": 53, "text": "In Malta, the law follows the European broader scope model. The relevant statute is the Prevention of Financial Markets Abuse Act of 2005, as amended.
Earlier acts included the Financial Markets Abuse Act of 2002, and the Insider Dealing and Market Abuse Act of 1994.", "title": "Legal differences among jurisdictions" }, { "paragraph_id": 54, "text": "The International Organization of Securities Commissions (IOSCO) paper on the \"Objectives and Principles of Securities Regulation\" (updated to 2003) states that the three objectives of good securities market regulation are investor protection, ensuring that markets are fair, efficient and transparent, and reducing systemic risk.", "title": "Legal differences among jurisdictions" }, { "paragraph_id": 55, "text": "The discussion of these \"Core Principles\" states that \"investor protection\" in this context means \"Investors should be protected from misleading, manipulative or fraudulent practices, including insider trading, front running or trading ahead of customers and the misuse of client assets.\" More than 85 percent of the world's securities and commodities market regulators are members of IOSCO and have signed on to these Core Principles.", "title": "Legal differences among jurisdictions" }, { "paragraph_id": 56, "text": "The World Bank and International Monetary Fund now use the IOSCO Core Principles in reviewing the financial health of different countries' regulatory systems as part of these organizations' Financial Sector Assessment Program, so laws against insider trading based on non-public information are now expected by the international community. Enforcement of insider trading laws varies widely from country to country, but the vast majority of jurisdictions now outlaw the practice, at least in principle.", "title": "Legal differences among jurisdictions" }, { "paragraph_id": 57, "text": "Larry Harris claims that differences in the effectiveness with which countries restrict insider trading help to explain the differences in executive compensation among those countries.
The US, for example, has much higher CEO salaries than do Japan or Germany, where insider trading is less effectively restrained.", "title": "Legal differences among jurisdictions" }, { "paragraph_id": 58, "text": "In 2014, the European Union (EU) adopted legislation (Criminal Sanctions for Market Abuse Directive) that harmonised criminal sanctions for insider dealing. All EU Member States agreed to introduce maximum prison sentences of at least four years for serious cases of market manipulation and insider dealing, and at least two years for improper disclosure of insider information.", "title": "By nation" }, { "paragraph_id": 59, "text": "The current Australian legislation arose out of a 1989 parliamentary committee report, which recommended removal of the requirement that the trader be 'connected' with the body corporate. This may have weakened the importance of the fiduciary duty rationale and possibly brought new potential offenders within its ambit. In Australia, if a person possesses inside information and knows, or ought reasonably to know, that the information is not generally available and is materially price sensitive, then the insider must not trade. Nor may she or he procure another to trade, or tip another.
Information will be considered generally available if it consists of readily observable matter or it has been made known to common investors and a reasonable period for it to be disseminated among such investors has elapsed.", "title": "By nation" }, { "paragraph_id": 60, "text": "In 2009, a journalist in Nettavisen (Thomas Gulbrandsen) was sentenced to 4 months in prison for insider trading.", "title": "By nation" }, { "paragraph_id": 61, "text": "The longest prison sentence in a Norwegian trial where the main charge was insider trading was for eight years (two suspended), when Alain Angelil was convicted in a district court on December 9, 2011.", "title": "By nation" }, { "paragraph_id": 62, "text": "Although insider trading in the UK has been illegal since 1980, it proved difficult to successfully prosecute individuals accused of insider trading. There were a number of notorious cases where individuals were able to escape prosecution. Instead, the UK regulators relied on a series of fines to punish market abuses.", "title": "By nation" }, { "paragraph_id": 63, "text": "These fines were widely perceived as an ineffective deterrent, and there was a statement of intent by the UK regulator (the Financial Services Authority) to use its powers to enforce the legislation (specifically the Financial Services and Markets Act 2000). Between 2009 and 2012 the FSA secured 14 convictions in relation to insider dealing.", "title": "By nation" }, { "paragraph_id": 64, "text": "Anil Kumar, a senior partner at management consulting firm McKinsey & Company, pleaded guilty in 2010 to insider trading in a \"descent from the pinnacle of the business world\".", "title": "By nation" }, { "paragraph_id": 65, "text": "Chip Skowron, a hedge fund co-portfolio manager of FrontPoint Partners LLC's health care funds, was convicted of insider trading in 2011, for which he served five years in prison.
He had been tipped off by a consultant to a company that the company was about to make a negative announcement regarding its clinical trial for a drug. At first Skowron denied the charges against him, and his defense attorney said he would plead not guilty, saying \"We look forward to responding to the allegations more fully in court at the appropriate time\". However, after the consultant charged with tipping him off pleaded guilty, he changed his position, and admitted his guilt.", "title": "By nation" }, { "paragraph_id": 66, "text": "Rajat Gupta, who had been managing partner of McKinsey & Co. and a director at Goldman Sachs Group Inc. and Procter & Gamble Co., was convicted by a federal jury in 2012 and sentenced to two years in prison for leaking inside information to hedge fund manager Raj Rajaratnam, who was sentenced to 11 years in prison. The case was prosecuted by the office of Preet Bharara, the United States Attorney for the Southern District of New York.", "title": "By nation" }, { "paragraph_id": 67, "text": "Mathew Martoma, former hedge fund trader and portfolio manager at S.A.C. Capital Advisors, was accused of generating possibly the largest single insider trading transaction profit in history at a value of $276 million. He was convicted in February 2014, and is serving a nine-year prison sentence.", "title": "By nation" }, { "paragraph_id": 68, "text": "With the guilty plea by Perkins Hixon in 2014 for insider trading from 2010 to 2013 while at Evercore Partners, Bharara said in a press release that 250 defendants whom his office had charged since August 2009 had now been convicted.", "title": "By nation" }, { "paragraph_id": 69, "text": "On December 10, 2014, a federal appeals court overturned the insider trading convictions of two former hedge fund traders, Todd Newman and Anthony Chiasson, based on the \"erroneous\" instructions given to jurors by the trial judge.
The decision was expected to affect the appeal of the separate insider-trading conviction of former SAC Capital portfolio manager Michael Steinberg, and in 2015 the U.S. Attorney and the SEC dropped their cases against Steinberg and others.", "title": "By nation" }, { "paragraph_id": 70, "text": "In 2016, Sean Stewart, a former managing director at Perella Weinberg Partners LP and vice president at JPMorgan Chase, was convicted on charges that he tipped off his father about pending health-care deals. The father, Robert Stewart, previously had pleaded guilty but didn't testify during his son's trial. It was argued that by way of compensation for the tip, the father had paid more than $10,000 for Sean's wedding photographer.", "title": "By nation" }, { "paragraph_id": 71, "text": "In 2017, Billy Walters, a Las Vegas sports bettor, was convicted of making $40 million on private information of Dallas-based dairy processing company Dean Foods, and sentenced to five years in prison. Walters's source, company director Thomas C. Davis, employing a prepaid cell phone and sometimes the code words \"Dallas Cowboys\" for Dean Foods, helped him from 2008 to 2014 realize profits and avoid losses in the stock, the federal jury found. Golfer Phil Mickelson \"was also mentioned during the trial as someone who had traded in Dean Foods shares and once owed nearly $2 million in gambling debts to\" Walters. Mickelson \"made roughly $1 million trading Dean Foods shares; he agreed to forfeit those profits in a related civil case brought by the Securities and Exchange Commission\". Walters appealed the verdict, but in December 2018 his conviction was upheld by the 2nd U.S.
Circuit Court of Appeals in Manhattan.", "title": "By nation" }, { "paragraph_id": 72, "text": "In 2018, David Blaszczak, the \"king of political intelligence,\" Theodore Huber and Robert Olan, two partners at hedge fund Deerfield Management, and Christopher Worrall, an employee at the Centers for Medicare and Medicaid Services (CMS), were convicted for insider trading by the U.S. Attorney's Office in the Southern District of New York. Worrall leaked confidential government information that he stole from CMS to Blaszczak, and Blaszczak passed that information to Huber and Olan, who made $7 million trading securities. The convictions were upheld in 2019 by the Second Circuit, U.S. Court of Appeals in Manhattan; that opinion was vacated by the Supreme Court in 2021, and the Second Circuit is now reconsidering its decision.", "title": "By nation" }, { "paragraph_id": 73, "text": "In 2021, Puneet Dikshit, a partner at McKinsey, pled guilty to trading on inside information that he had access to while advising Goldman Sachs on its acquisition of GreenSky, Inc. Dikshit was the third McKinsey partner to be convicted of insider trading in the Southern District of New York.", "title": "By nation" }, { "paragraph_id": 74, "text": "In 2023, Terren Peizer was charged with insider trading by the SEC, which alleged that he sold $20 million of Ontrak Inc. stock while he was in possession of material nonpublic negative information. Peizer was the CEO and chairman of Ontrak. In addition, the U.S. Department of Justice announced criminal charges of securities fraud against Peizer, charging that thereby he had avoided $12 million in losses; he was arrested. The case is assigned to the U.S. District Court for the Central District of California before U.S. District Judge Dale S. Fischer. 
If convicted, Peizer could face up to 65 years in prison.", "title": "By nation" }, { "paragraph_id": 75, "text": "In 2008, police uncovered an insider trading conspiracy involving Bay Street and Wall Street lawyer Gil Cornblum, who had worked at Sullivan & Cromwell and was working at Dorsey & Whitney, and a former lawyer, Stan Grmovsek, who were found to have gained over $10 million in illegal profits over a 14-year span. Cornblum committed suicide by jumping from a bridge while he was under investigation and shortly before he was to be arrested, but before criminal charges had been laid against him, one day before his alleged co-conspirator Grmovsek pled guilty. Grmovsek pleaded guilty to insider trading and was sentenced to 39 months in prison. This was the longest term ever imposed for insider trading in Canada. These crimes were explored in Mark Coakley's 2011 non-fiction book, Tip and Trade.", "title": "By nation" }, { "paragraph_id": 76, "text": "The U.S. SEC alleged that in 2009 Kuwaiti trader Hazem Al-Braikan engaged in insider trading after misleading the public about possible takeover bids for two companies. Three days after Al-Braikan was sued by the SEC, he was found dead of a gunshot wound to the head in his home in Kuwait City on July 26, 2009, in what Kuwaiti police called a suicide. The SEC later reached a $6.5 million settlement of civil insider trading charges with his estate and others.", "title": "By nation" }, { "paragraph_id": 77, "text": "The majority of shares in China before 2005 were non-tradeable shares that were sold privately rather than publicly on the stock exchange. To make shares more accessible, the China Securities Regulatory Commission (CSRC) required the companies to convert the non-tradeable shares into tradeable shares.
The deadline for companies to convert their shares was short; as a result, there was a massive volume of trading, and in the midst of these exchanges many people committed insider trading, knowing that the selling of these shares would affect prices. Chinese people did not fear insider trading as much as one might in the United States, because there was no possibility of imprisonment. Punishment may include monetary fines or temporary removal from a position in the company. The Chinese do not view insider trading as a crime worth prison time, because the person charged generally has a clean record and a history of success, with references that deter them from being viewed as a criminal. On October 1, 2015, Chinese fund manager Xu Xiang was arrested for insider trading.", "title": "By nation" }, { "paragraph_id": 78, "text": "Insider trading in India is an offense according to Sections 12A, 15G of the Securities and Exchange Board of India Act, 1992. Insider trading occurs when one with access to non-public, price-sensitive information about the securities of the company subscribes, buys, sells, or deals, or agrees to do so or counsels another to do so as principal or agent. Price-sensitive information is information that materially affects the value of the securities. The penalty for insider trading is imprisonment, which may extend to five years, and a minimum of five lakh rupees (500,000) to 25 crore rupees (250 million) or three times the profit made, whichever is higher.", "title": "By nation" }, { "paragraph_id": 79, "text": "The Wall Street Journal, in a 2014 article entitled \"Why It's Hard to Catch India's Insider Trading\", said that despite a widespread belief that insider trading takes place on a regular basis in India, there were few examples of insider traders being prosecuted. One former top regulator said that in India insider trading is deeply rooted and especially rampant because regulators do not have the tools to address it.
In the few cases where prosecution has taken place, cases have sometimes taken more than a decade to reach trial, and punishments have been light; despite SEBI having the legal authority to demand penalties of up to $4 million, the few fines that were levied for insider trading have usually been under $200,000.", "title": "By nation" }, { "paragraph_id": 80, "text": "Under Republic Act 8799, or the Securities Regulation Code, insider trading in the Philippines is illegal.", "title": "By nation" }, { "paragraph_id": 81, "text": "The practice of insider trading is an illegal act under Brazilian law, since it constitutes unfair behavior that threatens the security and equality of legal conditions in the market. Since 2001, the practice is also considered a crime. Law 6,385/1976, as amended by Law 10,303/2001, provided for Article 27-D, which typifies the conduct of \"Using relevant information not yet disclosed to the market, of which he is aware and from which he must maintain secrecy, capable of providing, for himself or for others, undue advantage, through trading, on his own behalf or on behalf of a third party, with securities: Penalty - imprisonment, from 1 (one) to 5 (five) years, and a fine of up to 3 (three) times the amount of the illicit advantage obtained as a result of the crime.\"", "title": "By nation" }, { "paragraph_id": 82, "text": "The first conviction handed down in Brazil for the practice of the offense of \"misuse of privileged information\" occurred in 2011, by federal judge Marcelo Costenaro Cavali, of the Sixth Criminal Court of São Paulo. The case concerned the Sadia-Perdigão merger.
The former Director of Finance and Investor Relations, Luiz Gonzaga Murat Júnior, was sentenced to one year and nine months in prison in an open regime, replaceable by community service, and barred from holding the position of administrator or fiscal councilor of a publicly traded company for the time he serves his sentence, in addition to a fine of R$349,711.53. The then-board member Romano Ancelmo Fontana Filho was sentenced to one year and five months in prison in an open regime, also replaceable by community service, and was likewise barred from holding the position of administrator or fiscal councilor of a publicly held company. He was also fined R$374,940.52.", "title": "By nation" }, { "paragraph_id": 83, "text": "General information", "title": "External links" }, { "paragraph_id": 84, "text": "Articles and opinions", "title": "External links" }, { "paragraph_id": 85, "text": "Data on insider trading", "title": "External links" } ]
Insider trading is the trading of a public company's stock or other securities based on material, nonpublic information about the company. In various countries, some kinds of trading based on insider information are illegal. This is because it is seen as unfair to other investors who do not have access to the information, as the investor with insider information could potentially make larger profits than a typical investor could make. The rules governing insider trading are complex and vary significantly from country to country. The extent of enforcement also varies from one country to another. The definition of insider in one jurisdiction can be broad and may cover not only insiders themselves but also any persons related to them, such as brokers, associates, and even family members. A person who becomes aware of non-public information and trades on that basis may be guilty of a crime. Trading by specific insiders, such as employees, is commonly permitted as long as it does not rely on material information not in the public domain. Many jurisdictions require that such trading be reported so that the transactions can be monitored. In the United States and several other jurisdictions, trading conducted by corporate officers, key employees, directors, or significant shareholders must be reported to the regulator or publicly disclosed, usually within a few business days of the trade. In these cases, insiders in the United States are required to file Form 4 with the U.S. Securities and Exchange Commission (SEC) when buying or selling shares of their own companies. The authors of one study claim that illegal insider trading raises the cost of capital for securities issuers, thus decreasing overall economic growth. Some economists, such as Henry Manne, have argued that insider trading should be allowed and could, in fact, benefit markets. 
There has long been "considerable academic debate" among business and legal scholars over whether or not insider trading should be illegal. Several arguments against outlawing insider trading have been identified: for example, although insider trading is illegal, most insider trading is never detected by law enforcement, and thus the illegality of insider trading might give the public the potentially misleading impression that "stock market trading is an unrigged game that anyone can play." Some legal analysis has questioned whether insider trading actually harms anyone in the legal sense, since some have questioned whether insider trading causes anyone to suffer an actual "loss" and whether anyone who suffers a loss is owed an actual legal duty by the insiders in question.
2001-12-14T10:44:25Z
2023-12-26T23:30:38Z
[ "Template:Reflist", "Template:Cite news", "Template:Cite journal", "Template:Cite magazine", "Template:Conflict of interest", "Template:Short description", "Template:Criminal law", "Template:Fact", "Template:Bare URL PDF", "Template:Spoken Wikipedia", "Template:Spaced ndash", "Template:Authority control", "Template:Irrelevant citation", "Template:Cmn", "Template:Cite web", "Template:Sfnp", "Template:Citation needed", "Template:See also", "Template:Webarchive", "Template:Cite book", "Template:Citation", "Template:Redirect", "Template:Criminology and penology", "Template:Rp", "Template:ISBN", "Template:Wiktionary" ]
https://en.wikipedia.org/wiki/Insider_trading
15,369
International Brigades
The International Brigades (Spanish: Brigadas Internacionales) were military units set up by the Communist International to assist the Popular Front government of the Second Spanish Republic during the Spanish Civil War. The organization existed for two years, from 1936 until 1938. It is estimated that during the entire war, between 40,000 and 59,000 members served in the International Brigades, including some 10,000 who died in combat. Beyond the Spanish Civil War, "International Brigades" is also sometimes used interchangeably with the term foreign legion in reference to military units comprising foreigners who volunteer to fight in the military of another state, often in times of war. The headquarters of the brigade was located at the Gran Hotel, Albacete, Castilla-La Mancha. They participated in the battles of Madrid, Jarama, Guadalajara, Brunete, Belchite, Teruel, Aragon and the Ebro. Most of these ended in defeat. For the last year of its existence, the International Brigades were integrated into the Spanish Republican Army as part of the Spanish Foreign Legion. The organisation was dissolved on 23 September 1938 by Spanish Prime Minister Juan Negrín in a vain attempt to get more support from the liberal democracies on the Non-Intervention Committee. The International Brigades were strongly supported by the Comintern and represented the Soviet Union's commitment to assisting the Spanish Republic (with arms, logistics, military advisers and the NKVD), just as Portugal, Fascist Italy, and Nazi Germany were assisting the opposing Nationalist insurgency. The largest number of volunteers came from France (where the French Communist Party had many members) and communist exiles from Italy and Germany. Many Jews were part of the brigades, being particularly numerous within the volunteers coming from the United States, Poland, France, England and Argentina. 
Republican volunteers who were opposed to Stalinism did not join the Brigades but instead enlisted in the separate Popular Front, the POUM (formed from Trotskyist, Bukharinist, and other anti-Stalinist groups, which did not separate Spaniards and foreign volunteers), or anarcho-syndicalist groups such as the Durruti Column, the IWA, and the CNT. Using foreign communist parties to recruit volunteers for Spain was first proposed in the Soviet Union in September 1936 (apparently at the suggestion of Maurice Thorez) by Willi Münzenberg, chief of Comintern propaganda for Western Europe. As a security measure, non-communist volunteers would first be interviewed by an NKVD agent. By the end of September, the Italian and French Communist Parties had decided to set up a column. Luigi Longo, ex-leader of the Italian Communist Youth, was charged with making the necessary arrangements with the Spanish government. The Soviet Ministry of Defense also helped, since it had experience of dealing with corps of international volunteers during the Russian Civil War. The idea was initially opposed by Largo Caballero, but after the first setbacks of the war, he changed his mind and finally agreed to the operation on 22 October. However, the Soviet Union did not withdraw from the Non-Intervention Committee, probably to avoid diplomatic conflict with France and the United Kingdom. The main recruitment center was in Paris, under the supervision of Soviet colonel Karol "Walter" Świerczewski. On 17 October 1936, an open letter by Joseph Stalin to José Díaz was published in Mundo Obrero, arguing that victory for the Second Spanish Republic was a matter not only for Spaniards but also for the whole of "progressive humanity"; in short order, communist activists joined with moderate socialist and liberal groups to form anti-fascist "popular front" militias in several countries, most of them under the control of or influenced by the Comintern. 
Entry to Spain was arranged for volunteers; for instance, Josip Broz, a Yugoslav who would become famous as Marshal Tito, was in Paris to provide assistance, money, and passports for volunteers from Eastern Europe (including numerous Yugoslav volunteers in the Spanish Civil War). Volunteers were sent by train or ship from France to Spain, and sent on to the base at Albacete. Many of them also made their own way to Spain. The volunteers were under no contract, nor any defined period of engagement, which would later prove a problem. Also, many Italians, Germans, and people from other countries joined the movement, with the idea that combat in Spain was the first step to restore democracy or advance a revolutionary cause in their own country. There were also many unemployed workers (especially from France), and adventurers. Finally, some 500 communists who had been exiled to Russia were sent to Spain (among them, experienced military leaders from the First World War like "Kléber" Stern, "Gomez" Zaisser, "Lukacs" Zalka and "Gal" Galicz, who would prove invaluable in combat). The operation was met with enthusiasm by communists, but with skepticism, at best, by anarchists. At first, the anarchists, who controlled the borders with France, were told to refuse communist volunteers, but reluctantly allowed their passage after protests. Keith Scott Watson, a journalist who fought alongside Esmond Romilly at Cerro de los Ángeles and who later "resigned" from the Thälmann Battalion, describes in his memoirs how he was detained and interrogated by anarchist border guards before eventually being allowed into the country. A group of 500 volunteers (mainly French, with a few exiled Poles and Germans) arrived in Albacete on 14 October 1936. They were met by international volunteers who had already been fighting in Spain: Germans from the Thälmann Battalion, Italians from the Centuria Gastone Sozzi and French from the Commune de Paris Battalion. 
Among them was the poet John Cornford, who had travelled down through France and Spain with a group of fellow intellectuals and artists including John Sommerfield, Bernard Knox and Jan Kurzke, all of whom left detailed memoirs of their battle experiences. On 30 May 1937, the Spanish liner Ciudad de Barcelona, carrying 200–250 volunteers from Marseille to Spain, was torpedoed by a Nationalist submarine off the coast of Malgrat de Mar. The ship sank and up to 65 volunteers are estimated to have drowned. Albacete soon became the International Brigades headquarters and its main depot. It was run by a troika of Comintern heavyweights: André Marty was commander; Luigi Longo (Gallo) was Inspector-General; and Giuseppe Di Vittorio (Nicoletti) was chief political commissar. There were many Jewish volunteers amongst the brigaders, about a quarter of the total. A Jewish company was formed within the Polish battalion and named after Naftali Botwin, a young Jewish communist killed in Poland in 1925. The French Communist Party provided uniforms for the Brigades. They were organized into mixed brigades, the basic military unit of the Republican People's Army. Discipline was severe. For several weeks, the Brigades were locked in their base while their strict military training was underway. The Battle of Madrid was a major success for the Republic, and staved off the prospect of a rapid defeat at the hands of Francisco Franco's forces. The role of the International Brigades in this victory was generally recognized but was exaggerated by Comintern propaganda, so that the outside world heard only of their victories and not of those of Spanish units. So successful was such propaganda that the British Ambassador, Sir Henry Chilton, declared that there were no Spaniards in the army which had defended Madrid. The International Brigade forces that fought in Madrid arrived only after Spanish Republican units had already fought successfully in the city's defense. 
Of the 40,000 Republican troops in the city, the foreign troops numbered less than 3,000. Even though the International Brigades did not win the battle by themselves, nor significantly change the situation, they certainly provided an example by their determined fighting and improved the morale of the population by demonstrating the concern of other nations in the fight. Many of the older members of the International Brigades provided valuable combat experience, having fought during the First World War (Spain remained neutral in 1914–1918) and the Irish War of Independence (some had fought in the British Army while others had fought in the Irish Republican Army (IRA)). One of the strategic positions in Madrid was the Casa de Campo. There the Nationalist troops were Moroccans, commanded by General José Enrique Varela. They were stopped by the III and IV Brigades of the Spanish Republican Army. On 9 November 1936, the XI International Brigade, comprising 1,900 men from the Edgar André Battalion, the Commune de Paris Battalion and the Dabrowski Battalion, together with a British machine-gun company, took up position at the Casa de Campo. In the evening, its commander, General Kléber, launched an assault on the Nationalist positions. This lasted for the whole night and part of the next morning. At the end of the fight, the Nationalist troops had been forced to retreat, abandoning all hopes of a direct assault on Madrid by way of the Casa de Campo, while the XI Brigade had lost a third of its personnel. On 13 November, the 1,550-strong XII International Brigade, made up of the Thälmann Battalion, the Garibaldi Battalion and the André Marty Battalion, deployed. Commanded by General "Lukacs", they assaulted Nationalist positions on the high ground of Cerro de los Ángeles. As a result of language and communication problems, command issues, lack of rest, poor coordination with armored units, and insufficient artillery support, the attack failed. 
On 19 November, the anarchist militias were forced to retreat, and Nationalist troops (Moroccans and Spanish Foreign Legionnaires, covered by the Nazi Condor Legion) captured a foothold in the University City. The XI Brigade was sent to drive the Nationalists out of the University City. The battle was extremely bloody, a mix of artillery and aerial bombardment, with bayonet and grenade fights, room by room. The anarchist leader Buenaventura Durruti was shot there on 19 November 1936 and died the next day. The battle in the university went on until three-quarters of the University City was under Nationalist control. Both sides then started setting up trenches and fortifications. It was then clear that any assault from either side would be far too costly; the Nationalist leaders had to renounce the idea of a direct assault on Madrid and prepare for a siege of the capital. On 13 December 1936, 18,000 Nationalist troops attempted an attack to close the encirclement of Madrid at Guadarrama, an engagement known as the Battle of the Corunna Road. The Republicans sent in a Soviet armored unit, under General Dmitry Pavlov, and both the XI and XII International Brigades. Violent combat followed, and they stopped the Nationalist advance. An attack was then launched by the Republic on the Córdoba front. The battle ended in a form of stalemate; a communique was issued, saying: "During the day the advance continued without the loss of any territory." The poets Ralph Winston Fox and John Cornford were killed. Eventually, the Nationalists advanced, taking the hydroelectric station at El Campo. André Marty accused the commander of the Marseillaise Battalion, Gaston Delasalle, of espionage and treason and had him executed. (It is doubtful that Delasalle was a spy for Francisco Franco; he was denounced by his second-in-command, André Heussler, who was himself subsequently executed for treason by the French Resistance during World War II.) 
Further Nationalist attempts after Christmas to encircle Madrid met with failure, but not without extremely violent combat. On 6 January 1937, the Thälmann Battalion arrived at Las Rozas and held its positions until it was destroyed as a fighting force. On 9 January, only 10 km had been lost to the Nationalists when the XIII and XIV International Brigades, and the 1st British Company, arrived in Madrid. Violent Republican assaults were launched in an attempt to retake the land, with little success. On 15 January, trenches and fortifications were built by both sides, resulting in a stalemate. The Nationalists did not take Madrid until the very end of the war, in March 1939, when they marched in unopposed; there were some pockets of resistance during the subsequent months. On 6 February 1937, following the fall of Málaga, the Nationalists launched an attack on the Madrid–Andalusia road, south of Madrid. The Nationalists quickly advanced on the little town of Ciempozuelos, held by the XV International Brigade, which was composed of the British Battalion (British Commonwealth and Irish), the Dimitrov Battalion (miscellaneous Balkan nationalities), the Sixth February Battalion (Belgians and French), the Canadian Mackenzie-Papineau Battalion and the Abraham Lincoln Brigade. An independent, mainly Irish unit of some 80 men, known afterward as the Connolly Column, also fought. Battalions were rarely composed entirely of one nationality; rather they were, for the most part, a mix of many. On 11 February 1937, a Nationalist brigade launched a surprise attack on the André Marty Battalion (XIV International Brigade), killing its sentries silently and crossing the Jarama. The Garibaldi Battalion stopped the advance with heavy fire. At another point, the same tactic allowed the Nationalists to move their troops across the river. On 12 February, the British Battalion of the XV International Brigade took the brunt of the attack, remaining under heavy fire for seven hours. 
The position became known as "Suicide Hill". At the end of the day, only 225 of the 600 members of the British battalion remained. One company was captured by ruse, when Nationalists advanced among their ranks singing The Internationale. On 17 February, the Republican Army counterattacked. On 23 and 27 February, the International Brigades were engaged, but with little success. The Lincoln Battalion was put under great pressure, with no artillery support. It suffered 120 killed and 175 wounded. Amongst the dead were the Irish poet Charles Donnelly and Leo Greene. There were heavy casualties on both sides, and "both claimed victory ... both suffered defeats". The battle resulted in a stalemate, with both sides digging in and creating elaborate trench systems. On 22 February 1937, the League of Nations Non-Intervention Committee ban on foreign volunteers went into effect. After the failed assault on the Jarama, the Nationalists attempted another assault on Madrid, this time from the northeast. The objective was the town of Guadalajara, 50 km from Madrid. The whole Italian expeditionary corps (35,000 men, with 80 battle tanks and 200 pieces of field artillery) was deployed, as Benito Mussolini wanted the victory to be credited to Italy. On 9 March 1937, the Italians made a breach in the Republican lines but did not properly exploit the advance. However, the rest of the Nationalist army was advancing, and the situation appeared critical for the Republicans. A formation drawn from the best available units of the Republican army, including the XI and XII International Brigades, was quickly assembled. At dawn on 10 March, the Nationalists closed in, and by noon, the Garibaldi Battalion counterattacked. Some confusion arose from the fact that the sides were not aware of each other's movements, and that both sides spoke Italian; this resulted in scouts from both sides exchanging information without realizing they were enemies. 
The Republican lines advanced and made contact with the XI International Brigade. Nationalist tanks were shot at and infantry patrols came into action. On 11 March, the Nationalist army broke the front of the Republican army. The Thälmann Battalion suffered heavy losses but succeeded in holding the Trijueque–Torija road. The Garibaldi also held its positions. On 12 March, Republican planes and tanks attacked. The Thälmann Battalion attacked Trijueque in a bayonet charge and re-took the town, capturing numerous prisoners. The International Brigades also saw combat in the Battle of Teruel in January 1938. The 35th International Division suffered heavily in this battle from aerial bombardment as well as shortages of food, winter clothing, and ammunition. The XIV International Brigade fought in the Battle of the Ebro in July 1938, the last Republican offensive of the war. Existing primary sources provide conflicting information as to the number of brigaders killed: a report of the IB Albacete staff from late March 1938 claimed 4,575 killed in action (KIA); an internal Soviet communication to Moscow by NKVD major Semyon Gendin from late July 1938 claimed 3,615 KIA; while Prime Minister Juan Negrín, in his farewell address in Barcelona on 28 October 1938, mentioned 5,000 fallen. In historiography there is likewise no agreement as to fatal casualties. One exact figure offered is 9,934; it was calculated in the mid-1970s and is at times repeated to this day. The highest estimate identified is 15,000 KIA. Many scholars prefer 10,000, also in recently published works. The popular Osprey series claims there were at least 7,800 killed, while other authors provide estimates that point rather to the range from 6,100 to 6,500. In some non-scholarly publications the number is given as 4,900. The above figures include brigaders killed in action, those who died of wounds later, and those who were executed as POWs. 
They do not include brigaders who were executed by their own side (a figure that some claim might have been 500); nor do they include victims of accidents (accidental shootings, traffic accidents, drownings, etc.) or those who perished due to health problems (illness, frostbite, poisoning, etc.). The total number of casualties is given as 48,909. It includes killed, missing and wounded, though it probably contains numerous duplicated cases, as one individual might have been wounded several times. The ratio of KIA to all IB combatants as calculated by historians may differ even more, as it depends not only on estimates of the number killed but also on estimates of the total number of volunteers. Some sources suggest a figure of 8.3%; some authors claim 15%; others opt for 16.7%, 24.7% or 28.6%; a single author arrived at as much as 33%. In comparison, in shock units used by the Nationalists, though they were not entirely comparable, the ratio was 11.3% for the Carlist requetés and 14.6% for the Moroccan regulares. The overall percentage of killed in action in the armies of both sides is estimated at some 7%. Estimates of the KIA ratio for major national contingents differ enormously and often bear no reasonable relation to the overall KIA ratio calculated for the Brigades. For the French (including French-speaking Belgians and Swiss) the figures range between 12% and 18%; for the Italians between 18% and 20%; for the British between 16% and 22%; for the Americans between 13% and 32%; for the Germans (including Austrians and German-speaking Swiss) between 22% and 40%; for the Yugoslavs between 35% and 50%; for the Canadians between 43% and 57%; and for the Poles (including Ukrainians, Jews and Belarusians) between 30% and 62%. 
Among smaller contingents, the KIA ratio appears to be 10% for the Cubans, 17% for the Czechoslovaks, 18% for the Austrians, 21% for the Balts (Estonians, Latvians, Lithuanians), 21–25% for the Swiss, 31% for the Finns, 13–33% for the Greeks, 23–35% for the Swedes, 40% for the Danes, and 44% for the Norwegians. In October 1938, at the height of the Battle of the Ebro, the Non-Intervention Committee demanded the withdrawal of the International Brigades. The Republican government of Juan Negrín announced the decision to the League of Nations on 21 September 1938. The disbandment was part of an ill-advised effort to get the Nationalists' foreign backers to withdraw their troops and to persuade the Western democracies such as France and Britain to end their arms embargo on the Republic. By this time an estimated 10,000 foreign volunteers were still serving in Spain for the Republican side, against about 50,000 foreign conscripts for the Nationalists (excluding another 30,000 Moroccans). Perhaps half of the International Brigadistas were exiles or refugees from Nazi Germany, Fascist Italy or other countries, such as Hungary, which had authoritarian right-wing governments at the time. These men could not safely return home, and some were instead given honorary Spanish citizenship and integrated into Spanish units of the Popular Army. The remainder were repatriated to their own countries. The Belgian and Dutch volunteers lost their citizenship because they had served in a foreign army. The first brigades were composed mostly of French, Belgian, Italian, and German volunteers, backed by a sizeable contingent of Polish miners from Northern France and Belgium. The XI, XII and XIII were the first brigades formed. Later, the XIV and XV Brigades were raised, mixing experienced soldiers with new volunteers. Smaller brigades (the 86th, 129th and 150th) were formed in late 1937 and 1938, mostly for temporary tactical reasons. 
About 32,000 foreigners volunteered to defend the Spanish Republic, the vast majority of them with the International Brigades. Many were veterans of World War I. Their early engagements in 1936 during the Siege of Madrid amply demonstrated their military and propaganda value. The international volunteers were mainly socialists, communists, or others willing to accept communist authority, and a high proportion were Jewish. Some were involved in the Barcelona May Days, fighting against leftist opponents of the Communists: the Workers' Party of Marxist Unification (Partido Obrero de Unificación Marxista, POUM, an anti-Stalinist Marxist party) and the anarchist Confederación Nacional del Trabajo (CNT) and Iberian Anarchist Federation (FAI), who had strong support in Catalonia. These libertarian groups attracted fewer foreign volunteers. To simplify communication, the battalions usually grouped volunteers of the same nationality or language. The battalions were often (formally, at least) named after inspirational people or events. From spring 1937 onwards, many battalions contained one Spanish volunteer company of about 150 men. Later in the war, military discipline tightened and learning Spanish became mandatory. By decree of 23 September 1937, the International Brigades formally became units of the Spanish Foreign Legion. This made them subject to the Spanish Code of Military Justice. However, the Spanish Foreign Legion itself sided with the Nationalists throughout the coup and the civil war. The same decree also specified that non-Spanish officers in the Brigades should not exceed Spanish ones by more than 50 percent. After the Civil War was eventually won by the Nationalists, the brigaders were initially on the "wrong side" of history, especially as most of their home countries had right-wing governments (in France, for instance, the Popular Front was no longer in power). 
However, since most of these countries soon found themselves at war with the very powers which had been supporting the Nationalists, the brigadistas gained some prestige as the first guard of the democracies, having foreseen the danger of fascism and gone to fight it. In retrospect, it was clear that the war in Spain was as much a precursor of the Second World War as a Spanish civil war. Some glory therefore accrued to the volunteers (a great many of the survivors also fought during World War II), but this soon faded amid fears that it would promote communism by association. An exception is found among some left-wingers, for example many anarchists, among whom the Brigades, or at least their leadership, are criticized for their role in suppressing the Spanish Revolution. An example of a modern work that promotes this view is Ken Loach's film Land and Freedom. A well-known contemporary account of the Spanish Civil War which also takes this view is George Orwell's book Homage to Catalonia. Germany remained undivided until after the Second World War. At that time, the new communist state, the German Democratic Republic, began to create a national identity which was separate from and antithetical to the former Nazi Germany. The Spanish Civil War, and especially the role of the International Brigades, became a substantial part of East Germany's memorial rituals because of the substantial numbers of German communists who had served in the brigades. These showcased a commitment by many Germans to antifascism at a time when Germany and Nazism were often conflated. Survivors of the Mackenzie-Papineau Battalion were often investigated by the Royal Canadian Mounted Police and denied employment when they returned to Canada. Some were prevented from serving in the military during the Second World War due to "political unreliability". In 1995 a monument to veterans of the war was built near Ontario's provincial parliament. 
On 12 February 2000, a bronze statue, "The Spirit of the Republic" by sculptor Jack Harman, based on an original poster from the Spanish Republic, was placed on the grounds of the British Columbia Legislature. In 2001, the few remaining Canadian veterans of the Spanish Civil War dedicated a monument to Canadian members of the International Brigades in Ottawa's Green Island Park. Under legislation dating from 1920, Polish citizens who volunteered for the IB were automatically stripped of their citizenship for serving in foreign armed forces without formal approval. Following the Republican defeat, the combatants recruited in France and Belgium returned there. Among the others, some served in pro-Communist partisan units in German-occupied Poland and some made it to the USSR and served in the pro-Communist Polish army raised there. In Communist Poland the IB combatants, referred to as "Dąbrowszczacy", were granted veteran rights, but their fate differed depending upon political circumstances. After some early exaltation in 1945–1949, they were later treated somewhat cautiously. There were cases of veterans assuming high positions in administration, and especially in security, but there were also cases of deposition, arrest and imprisonment on trumped-up charges of political conspiracy; those imprisoned were released in the mid-1950s. Though from the outset Polish engagement in the IB was hailed as the "working class taking up arms against Fascism", the most intense idolization took place between the mid-1950s and the mid-1960s, with a spate of publications, and schools and streets named after the "Dąbrowszczacy". However, an antisemitic turn in the late 1960s again led to the de-emphasizing of IB volunteers, many of whom left Poland. Until the end of Communist rule the IB episode was duly acknowledged, but the related propaganda was a far cry from the veneration reserved for wartime Communist partisans or the USSR-raised Polish army. 
Despite some efforts on the part of IB combatants, no monument has been erected. After 1989 it was unclear whether the Dąbrowszczacy were still entitled to veteran privileges; the issue generated political debates until they became moot, as almost all IB combatants had passed away. Another question concerned homage references existing in public space. The state-run Institute of National Remembrance (IPN) declared that Polish IB combatants had been in the service of the Stalinist regime, making related homage references subject to de-communisation legislation. However, the efficiency of these purges of public space differs depending upon the local political configuration, and heated public debate occasionally ensues. To this day the role of Polish IB combatants remains a highly divisive topic; for some they are traitors and for others heroes. In Switzerland, public sympathy was high for the Republican cause, but the federal government banned all fundraising and recruiting activities a month after the start of the war as part of the country's long-standing policy of neutrality. Around 800 Swiss volunteers joined the International Brigades, among them a small number of women. Sixty percent of Swiss volunteers identified as communists, while the others included socialists, anarchists and antifascists. Some 170 Swiss volunteers were killed in the war. The survivors were tried by military courts upon their return to Switzerland for violating the criminal prohibition on foreign military service. The courts pronounced 420 sentences, ranging from around two weeks to four years in prison, and often also stripped the convicts of their political rights for periods of up to five years. In Swiss society, traditionally highly appreciative of civic virtue, this translated into long-term stigmatization even after the penalty had been served. In the judgment of Swiss historian Mauro Cerutti, volunteers were punished more harshly in Switzerland than in any other democratic country. 
Motions to pardon the Swiss brigaders on the grounds that they fought for a just cause have been repeatedly introduced in the Swiss federal parliament. A first such proposal was defeated in 1939 on neutrality grounds. In 2002, Parliament again rejected a pardon of the Swiss war volunteers, with a majority arguing that they broke a law that remains in effect to this day. In March 2009, Parliament adopted a third bill of pardon, retroactively rehabilitating the Swiss brigaders, only a handful of whom were still alive. In 2000 a monument honoring Swiss IB combatants was unveiled in Geneva; there are also numerous plaques mounted elsewhere, e.g., at the Volkshaus in Zürich. On disbandment, 305 British volunteers left Spain to return home. They arrived at Victoria Station in central London on 7 December and were met warmly as returning heroes by a crowd of supporters including Clement Attlee, Stafford Cripps, Willie Gallacher, Ellen Wilkinson and Will Lawther. The last surviving British member of the International Brigades, Geoffrey Servante, died in April 2019 aged 99. The International Brigade Memorial Trust is a registered charity that handles activities around the memory of volunteers from Britain and Ireland. The group maintains a map of memorials to volunteers in the Spanish Civil War and organises yearly events to commemorate the war. In the United States, the returned volunteers were labeled "premature anti-fascists" by the FBI, denied promotion during service in the U.S. military during World War II, and pursued by Congressional committees during the Red Scare of 1947–1957. However, threats of loss of citizenship were not carried out. Josep Almudéver, believed to be the last surviving veteran of the International Brigades, died on 23 May 2021 at the age of 101. 
Although born into a Spanish family and living in Spain at the outbreak of the conflict, he also held French citizenship and enlisted in the International Brigades to avoid age restrictions in the Spanish Republican army. He served in the CXXIX International Brigade and later fought in the Spanish Maquis, and after the war lived in exile in France. On 26 January 1996, the Spanish government granted Spanish citizenship to the 600 or so remaining Brigadistas, fulfilling a promise made by Prime Minister Juan Negrín in 1938. In 1996, Jacques Chirac, then French President, granted the former French members of the International Brigades the legal status of former service personnel ("anciens combattants") following the request of two French communist Members of Parliament, Lefort and Asensi, both children of volunteers. Before 1996, the same request had been turned down several times, including by François Mitterrand, the former Socialist President. The International Brigades were inheritors of a socialist aesthetic. Their flags featured the colors of the Spanish Republic (red, yellow and purple), often along with socialist symbols (red flags, hammer and sickle, fist). The emblem of the brigades themselves was the three-pointed red star, which featured prominently.
[ { "paragraph_id": 0, "text": "The International Brigades (Spanish: Brigadas Internacionales) were military units set up by the Communist International to assist the Popular Front government of the Second Spanish Republic during the Spanish Civil War. The organization existed for two years, from 1936 until 1938. It is estimated that during the entire war, between 40,000 and 59,000 members served in the International Brigades, including some 10,000 who died in combat. Beyond the Spanish Civil War, \"International Brigades\" is also sometimes used interchangeably with the term foreign legion in reference to military units comprising foreigners who volunteer to fight in the military of another state, often in times of war.", "title": "" }, { "paragraph_id": 1, "text": "The headquarters of the brigade was located at the Gran Hotel, Albacete, Castilla-La Mancha. They participated in the battles of Madrid, Jarama, Guadalajara, Brunete, Belchite, Teruel, Aragon and the Ebro. Most of these ended in defeat. For the last year of its existence, the International Brigades were integrated into the Spanish Republican Army as part of the Spanish Foreign Legion. The organisation was dissolved on 23 September 1938 by Spanish Prime Minister Juan Negrín in a vain attempt to get more support from the liberal democracies on the Non-Intervention Committee.", "title": "" }, { "paragraph_id": 2, "text": "The International Brigades were strongly supported by the Comintern and represented the Soviet Union's commitment to assisting the Spanish Republic (with arms, logistics, military advisers and the NKVD), just as Portugal, Fascist Italy, and Nazi Germany were assisting the opposing Nationalist insurgency. The largest number of volunteers came from France (where the French Communist Party had many members) and communist exiles from Italy and Germany. 
Many Jews were part of the brigades, being particularly numerous within the volunteers coming from the United States, Poland, France, England and Argentina.", "title": "" }, { "paragraph_id": 3, "text": "Republican volunteers who were opposed to Stalinism did not join the Brigades but instead enlisted in the separate militias of the POUM (formed from Trotskyist, Bukharinist, and other anti-Stalinist groups, which did not separate Spaniards and foreign volunteers), or anarcho-syndicalist groups such as the Durruti Column, the IWA, and the CNT.", "title": "" }, { "paragraph_id": 4, "text": "Using foreign communist parties to recruit volunteers for Spain was first proposed in the Soviet Union in September 1936—apparently at the suggestion of Maurice Thorez—by Willi Münzenberg, chief of Comintern propaganda for Western Europe. As a security measure, non-communist volunteers would first be interviewed by an NKVD agent.", "title": "Formation and recruitment" }, { "paragraph_id": 5, "text": "By the end of September, the Italian and French Communist Parties had decided to set up a column. Luigi Longo, ex-leader of the Italian Communist Youth, was charged with making the necessary arrangements with the Spanish government. The Soviet Ministry of Defense also helped, since it had experience of dealing with corps of international volunteers during the Russian Civil War. The idea was initially opposed by Largo Caballero, but after the first setbacks of the war, he changed his mind and finally agreed to the operation on 22 October. However, the Soviet Union did not withdraw from the Non-Intervention Committee, probably to avoid diplomatic conflict with France and the United Kingdom.", "title": "Formation and recruitment" }, { "paragraph_id": 6, "text": "The main recruitment center was in Paris, under the supervision of Soviet colonel Karol \"Walter\" Świerczewski.
On 17 October 1936, an open letter by Joseph Stalin to José Díaz was published in Mundo Obrero, arguing that victory for the Spanish Second Republic was a matter not only for Spaniards but also for the whole of \"progressive humanity\"; in short order, communist activists joined with moderate socialist and liberal groups to form anti-fascist \"popular front\" militias in several countries, most of them under the control of or influenced by the Comintern.", "title": "Formation and recruitment" }, { "paragraph_id": 7, "text": "Entry to Spain was arranged for volunteers; for instance, Josip Broz, a Yugoslav who would become famous as Marshal Tito, was in Paris to provide assistance, money, and passports for volunteers from Eastern Europe (including numerous Yugoslav volunteers in the Spanish Civil War). Volunteers were sent by train or ship from France to Spain, and then on to the base at Albacete. Many of them also made their own way to Spain. The volunteers served under no contract and no defined period of engagement, which would later prove a problem.", "title": "Formation and recruitment" }, { "paragraph_id": 8, "text": "Also, many Italians, Germans, and people from other countries joined the movement, with the idea that combat in Spain was the first step to restore democracy or advance a revolutionary cause in their own country. There were also many unemployed workers (especially from France), and adventurers. Finally, some 500 communists who had been exiled to Russia were sent to Spain (among them, experienced military leaders from the First World War like \"Kléber\" Stern, \"Gomez\" Zaisser, \"Lukacs\" Zalka and \"Gal\" Galicz, who would prove invaluable in combat).", "title": "Formation and recruitment" }, { "paragraph_id": 9, "text": "The operation was met with enthusiasm by communists, but with skepticism, at best, by anarchists.
At first, the anarchists, who controlled the borders with France, were told to refuse communist volunteers, but reluctantly allowed their passage after protests. Keith Scott Watson, a journalist who fought alongside Esmond Romilly at Cerro de los Ángeles and who later \"resigned\" from the Thälmann Battalion, describes in his memoirs how he was detained and interrogated by anarchist border guards before eventually being allowed into the country. A group of 500 volunteers (mainly French, with a few exiled Poles and Germans) arrived in Albacete on 14 October 1936. They were met by international volunteers who had already been fighting in Spain: Germans from the Thälmann Battalion, Italians from the Centuria Gastone Sozzi and French from the Commune de Paris Battalion. Among them was the poet John Cornford, who had travelled down through France and Spain with a group of fellow intellectuals and artists including John Sommerfield, Bernard Knox and Jan Kurzke, all of whom left detailed memoirs of their battle experiences.", "title": "Formation and recruitment" }, { "paragraph_id": 10, "text": "On 30 May 1937, the Spanish liner Ciudad de Barcelona, carrying 200–250 volunteers from Marseille to Spain, was torpedoed by a Nationalist submarine off the coast of Malgrat de Mar. The ship sank and up to 65 volunteers are estimated to have drowned.", "title": "Formation and recruitment" }, { "paragraph_id": 11, "text": "Albacete soon became the International Brigades' headquarters and its main depot. It was run by a troika of Comintern heavyweights: André Marty was commander; Luigi Longo (Gallo) was Inspector-General; and Giuseppe Di Vittorio (Nicoletti) was chief political commissar.", "title": "Formation and recruitment" }, { "paragraph_id": 12, "text": "There were many Jewish volunteers amongst the brigaders — about a quarter of the total.
A Jewish company was formed within the Polish battalion that was named after Naftali Botwin, a young Jewish communist killed in Poland in 1925.", "title": "Formation and recruitment" }, { "paragraph_id": 13, "text": "The French Communist Party provided uniforms for the Brigades. They were organized into mixed brigades, the basic military unit of the Republican People's Army. Discipline was severe. For several weeks, the Brigades were locked in their base while their strict military training was underway.", "title": "Formation and recruitment" }, { "paragraph_id": 14, "text": "The Battle of Madrid was a major success for the Republic, and staved off the prospect of a rapid defeat at the hands of Francisco Franco's forces. The role of the International Brigades in this victory was generally recognized but was exaggerated by Comintern propaganda so that the outside world heard only of their victories and not those of Spanish units. So successful was such propaganda that the British Ambassador, Sir Henry Chilton, declared that there were no Spaniards in the army which had defended Madrid. The International Brigade forces that fought in Madrid arrived only after Republican units had already fought successfully. Of the 40,000 Republican troops in the city, the foreign troops numbered fewer than 3,000.", "title": "Service" }, { "paragraph_id": 15, "text": "Even though the International Brigades did not win the battle by themselves, nor significantly change the situation, they certainly did provide an example by their determined fighting and improved the morale of the population by demonstrating the concern of other nations in the fight.
Many of the older members of the International Brigades provided valuable combat experience, having fought during the First World War (Spain remained neutral in 1914–1918) and the Irish War of Independence (some had fought in the British Army while others had fought in the Irish Republican Army (IRA)).", "title": "Service" }, { "paragraph_id": 16, "text": "One of the strategic positions in Madrid was the Casa de Campo. There the Nationalist troops were Moroccans, commanded by General José Enrique Varela. They were stopped by the III and IV Brigades of the Spanish Republican Army.", "title": "Service" }, { "paragraph_id": 17, "text": "On 9 November 1936, the XI International Brigade — comprising 1,900 men from the Edgar André Battalion, the Commune de Paris Battalion and the Dabrowski Battalion, together with a British machine-gun company — took up position at the Casa de Campo. In the evening, its commander, General Kléber, launched an assault on the Nationalist positions. This lasted for the whole night and part of the next morning. At the end of the fight, the Nationalist troops had been forced to retreat, abandoning all hopes of a direct assault on Madrid by Casa de Campo, while the XIth Brigade had lost a third of its personnel.", "title": "Service" }, { "paragraph_id": 18, "text": "On 13 November, the 1,550-man-strong XII International Brigade, made up of the Thälmann Battalion, the Garibaldi Battalion and the André Marty Battalion, deployed. Commanded by General \"Lukacs\", they assaulted Nationalist positions on the high ground of Cerro de los Ángeles.
As a result of language and communication problems, command issues, lack of rest, poor coordination with armored units, and insufficient artillery support, the attack failed.", "title": "Service" }, { "paragraph_id": 19, "text": "On 19 November, the anarchist militias were forced to retreat, and Nationalist troops — Moroccans and Spanish Foreign Legionnaires, covered by the Nazi Condor Legion — captured a foothold in the University City. The XI Brigade was sent to drive the Nationalists out of the University City. The battle was extremely bloody, a mix of artillery and aerial bombardment, with bayonet and grenade fights, room by room. Anarchist leader Buenaventura Durruti was shot there on 19 November 1936 and died the next day. The battle in the university went on until three-quarters of the University City was under Nationalist control. Both sides then started setting up trenches and fortifications. It was then clear that any assault from either side would be far too costly; the Nationalist leaders had to renounce the idea of a direct assault on Madrid, and prepare for a siege of the capital.", "title": "Service" }, { "paragraph_id": 20, "text": "On 13 December 1936, 18,000 Nationalist troops attempted an attack to close the encirclement of Madrid at Guadarrama — an engagement known as the Battle of the Corunna Road. The Republicans sent in a Soviet armored unit, under General Dmitry Pavlov, and both the XI and XII International Brigades. Violent combat followed, and they stopped the Nationalist advance.", "title": "Service" }, { "paragraph_id": 21, "text": "An attack was then launched by the Republic on the Córdoba front. The battle ended in a form of stalemate; a communiqué was issued, saying: \"During the day the advance continued without the loss of any territory.\" Poets Ralph Winston Fox and John Cornford were killed. Eventually, the Nationalists advanced, taking the hydroelectric station at El Campo.
André Marty accused the commander of the Marseillaise Battalion, Gaston Delasalle, of espionage and treason and had him executed. (It is doubtful that Delasalle would have been a spy for Francisco Franco; he was denounced by his second-in-command, André Heussler, who was subsequently executed for treason during World War II by the French Resistance.)", "title": "Service" }, { "paragraph_id": 22, "text": "Further Nationalist attempts after Christmas to encircle Madrid met with failure, but not without extremely violent combat. On 6 January 1937, the Thälmann Battalion arrived at Las Rozas and held its positions until it was destroyed as a fighting force. By 9 January, only 10 km had been lost to the Nationalists, when the XIII and XIV International Brigades, together with the 1st British Company, arrived in Madrid. Violent Republican assaults were launched in an attempt to retake the land, with little success. On 15 January, trenches and fortifications were built by both sides, resulting in a stalemate.", "title": "Service" }, { "paragraph_id": 23, "text": "The Nationalists did not take Madrid until the very end of the war, in March 1939, when they marched in unopposed. There were some pockets of resistance during the subsequent months.", "title": "Service" }, { "paragraph_id": 24, "text": "On 6 February 1937, following the fall of Málaga, the Nationalists launched an attack on the Madrid–Andalusia road, south of Madrid. The Nationalists quickly advanced on the little town of Ciempozuelos, held by the XV International Brigade, which was composed of the British Battalion (British Commonwealth and Irish), the Dimitrov Battalion (miscellaneous Balkan nationalities), the Sixth February Battalion (Belgians and French), the Canadian Mackenzie-Papineau Battalion and the Abraham Lincoln Brigade. An independent, 80-man-strong, mainly Irish unit, known afterward as the Connolly Column, also fought.
Battalions were rarely composed entirely of one nationality; rather, they were, for the most part, a mix of many.", "title": "Service" }, { "paragraph_id": 25, "text": "On 11 February 1937, a Nationalist brigade launched a surprise attack on the André Marty Battalion (XIV International Brigade), killing its sentries silently and crossing the Jarama. The Garibaldi Battalion stopped the advance with heavy fire. At another point, the same tactic allowed the Nationalists to move their troops across the river. On 12 February, the British Battalion of the XV International Brigade took the brunt of the attack, remaining under heavy fire for seven hours. The position became known as \"Suicide Hill\". At the end of the day, only 225 of the 600 members of the British battalion remained. One company was captured by a ruse, when Nationalists advanced among their ranks singing The Internationale.", "title": "Service" }, { "paragraph_id": 26, "text": "On 17 February, the Republican Army counterattacked. On 23 and 27 February, the International Brigades were engaged, but with little success. The Lincoln Battalion was put under great pressure, with no artillery support. It suffered 120 killed and 175 wounded. Amongst the dead were the Irish poet Charles Donnelly and Leo Greene.", "title": "Service" }, { "paragraph_id": 27, "text": "There were heavy casualties on both sides, and although \"both claimed victory ... both suffered defeats\", the battle resulted in a stalemate, with both sides digging in and creating elaborate trench systems. On 22 February 1937, the League of Nations Non-Intervention Committee ban on foreign volunteers went into effect.", "title": "Service" }, { "paragraph_id": 28, "text": "After the failed assault on the Jarama, the Nationalists attempted another assault on Madrid, this time from the northeast. The objective was the town of Guadalajara, 50 km from Madrid.
The whole Italian expeditionary corps — 35,000 men, with 80 battle tanks and 200 field artillery pieces — was deployed, as Benito Mussolini wanted the victory to be credited to Italy. On 9 March 1937, the Italians made a breach in the Republican lines but did not properly exploit the advance. However, the rest of the Nationalist army was advancing, and the situation appeared critical for the Republicans. A formation drawn from the best available units of the Republican army, including the XI and XII International Brigades, was quickly assembled.", "title": "Service" }, { "paragraph_id": 29, "text": "At dawn on 10 March, the Nationalists closed in, and by noon, the Garibaldi Battalion counterattacked. Some confusion arose from the fact that the sides were not aware of each other's movements, and that both sides spoke Italian; this resulted in scouts from both sides exchanging information without realizing they were enemies. The Republican lines advanced and made contact with the XI International Brigade. Nationalist tanks were shot at and infantry patrols came into action.", "title": "Service" }, { "paragraph_id": 30, "text": "On 11 March, the Nationalist army broke the front of the Republican army. The Thälmann Battalion suffered heavy losses but succeeded in holding the Trijueque–Torija road. The Garibaldi also held its positions. On 12 March, Republican planes and tanks attacked. The Thälmann Battalion attacked Trijueque in a bayonet charge and retook the town, capturing numerous prisoners.", "title": "Service" }, { "paragraph_id": 31, "text": "The International Brigades also saw combat in the Battle of Teruel in January 1938. The 35th International Division suffered heavily in this battle from aerial bombardment as well as shortages of food, winter clothing, and ammunition.
The XIV International Brigade fought in the Battle of the Ebro in July 1938, the last Republican offensive of the war.", "title": "Service" }, { "paragraph_id": 32, "text": "Existing primary sources provide conflicting information as to the number of brigadistas killed; a report of the IB Albacete staff from late March 1938 claimed 4,575 KIA, an internal Soviet communication to Moscow by NKVD major Semyon Gendin from late July 1938 claimed 3,615 KIA, while Prime Minister Juan Negrín in his farewell address in Barcelona of October 28, 1938, mentioned 5,000 fallen.", "title": "Casualties" }, { "paragraph_id": 33, "text": "Also, in historiography there is no agreement as to fatal casualties. One exact figure offered is 9,934; it was calculated in the mid-1970s and is at times repeated to this day. The highest estimate identified is 15,000 KIA. Many scholars prefer 10,000, also in recently published works. The popular Osprey series claims there were at least 7,800 killed. However, other authors provide estimates that point rather to the range from 6,100 to 6,500. In some non-scholarly publications the number is given as 4,900. The above figures include brigadistas killed in action, those who died of wounds later, and those who were executed as POWs. They do not include brigadistas who were executed by their own side, a figure that some claim might have been 500; they also do not include victims of accidents (accidental shootings, traffic accidents, drownings, etc.) or those who perished due to health problems (illness, frostbite, poisoning, etc.).
It includes killed, missing and wounded, though probably contains numerous duplicated cases, as one individual might have suffered wounds a few times.", "title": "Casualties" }, { "paragraph_id": 35, "text": "The ratio of KIA to all IB combatants as calculated by historians varies even more, as it depends not only on estimates of the number killed, but also on estimates of the total number of volunteers. Some sources suggest a figure of 8.3%; some authors claim 15%, while others opt for 16.7%, 24.7% or 28.6%; a single author even arrived at 33%. In comparison, in shock units used by the Nationalists, though they were not entirely comparable, the ratio was 11.3% for the Carlist requetés and 14.6% for the Moroccan regulares. The overall percentage of killed in action in armies of both sides is estimated at some 7%.", "title": "Casualties" }, { "paragraph_id": 36, "text": "Estimates of the KIA ratio for major national contingents differ enormously and often bear no reasonable relation to the overall KIA ratio calculated for the Brigades. For the French (including French-speaking Belgians and Swiss) the figures range between 12% and 18%; for the Italians between 18% and 20%; for the British between 16% and 22%; for the Americans between 13% and 32%; for the Germans (including Austrians and German-speaking Swiss) between 22% and 40%; for the Yugoslavs between 35% and 50%; for the Canadians between 43% and 57%; for the Poles (including Ukrainians, Jews, Belarusians) between 30% and 62%.
Among smaller contingents, the calculated KIA ratio appears to be 10% for the Cubans, 17% for the Czechoslovaks, 18% for the Austrians, 21% for the Balts (Estonians, Latvians, Lithuanians), 21–25% for the Swiss, 31% for the Finns, 13–33% for the Greeks, 23–35% for the Swedes, 40% for the Danes, and 44% for the Norwegians.", "title": "Casualties" }, { "paragraph_id": 37, "text": "In October 1938, at the height of the Battle of the Ebro, the Non-Intervention Committee demanded the withdrawal of the International Brigades. The Republican government of Juan Negrín announced the decision in the League of Nations on 21 September 1938. The disbandment was part of an ill-advised effort to get the Nationalists' foreign backers to withdraw their troops and to persuade Western democracies such as France and Britain to end their arms embargo on the Republic.", "title": "Disbandment" }, { "paragraph_id": 38, "text": "By this time there were an estimated 10,000 foreign volunteers still serving in Spain for the Republican side, and about 50,000 foreign conscripts for the Nationalists (excluding another 30,000 Moroccans). Perhaps half of the International Brigadistas were exiles or refugees from Nazi Germany, Fascist Italy or other countries, such as Hungary, which had authoritarian right-wing governments at the time. These men could not safely return home, and some were instead given honorary Spanish citizenship and integrated into Spanish units of the Popular Army. The remainder were repatriated to their own countries. The Belgian and Dutch volunteers lost their citizenship because they had served in a foreign army.", "title": "Disbandment" }, { "paragraph_id": 39, "text": "The first brigades were composed mostly of French, Belgian, Italian, and German volunteers, backed by a sizeable contingent of Polish miners from Northern France and Belgium. The XIth, XIIth and XIIIth were the first brigades formed.
Later, the XIVth and XVth Brigades were raised, mixing experienced soldiers with new volunteers. Smaller Brigades — the 86th, 129th and 150th — were formed in late 1937 and 1938, mostly for temporary tactical reasons.", "title": "Composition" }, { "paragraph_id": 40, "text": "About 32,000 foreigners volunteered to defend the Spanish Republic, the vast majority of them with the International Brigades. Many were veterans of World War I. Their early engagements in 1936 during the Siege of Madrid amply demonstrated their military and propaganda value.", "title": "Composition" }, { "paragraph_id": 41, "text": "The international volunteers were mainly socialists, communists, or others willing to accept communist authority, and a high proportion were Jewish. Some were involved in the Barcelona May Days fighting against leftist opponents of the Communists: the Workers' Party of Marxist Unification (Partido Obrero de Unificación Marxista, POUM, an anti-Stalinist Marxist party), the anarchist Confederación Nacional del Trabajo (CNT), and the Iberian Anarchist Federation (FAI), who had strong support in Catalonia. These libertarian groups attracted fewer foreign volunteers.", "title": "Composition" }, { "paragraph_id": 42, "text": "To simplify communication, the battalions usually concentrated on people of the same nationality or language group. The battalions were often (formally, at least) named after inspirational people or events. From spring 1937 onwards, many battalions contained one Spanish volunteer company of about 150 men.", "title": "Composition" }, { "paragraph_id": 43, "text": "Later in the war, military discipline tightened and learning Spanish became mandatory. By decree of 23 September 1937, the International Brigades formally became units of the Spanish Foreign Legion. This made them subject to the Spanish Code of Military Justice. However, the Spanish Foreign Legion itself sided with the Nationalists throughout the coup and the civil war.
The same decree also specified that non-Spanish officers in the Brigades should not exceed Spanish ones by more than 50 percent.", "title": "Composition" }, { "paragraph_id": 44, "text": "After the Civil War was eventually won by the Nationalists, the brigaders were initially on the \"wrong side\" of history, especially as most of their home countries had right-wing governments (in France, for instance, the Popular Front was not in power anymore).", "title": "Status after the war" }, { "paragraph_id": 45, "text": "However, since most of these countries soon found themselves at war with the very powers which had been supporting the Nationalists, the brigadistas gained some prestige as the first guard of the democracies, as having foreseen the danger of fascism and gone to fight it. Retrospectively, it was clear that the war in Spain was as much a precursor of the Second World War as a Spanish civil war.", "title": "Status after the war" }, { "paragraph_id": 46, "text": "Some glory therefore accrued to the volunteers (a great many of the survivors also fought during World War II), but this soon faded in the fear that it would promote communism by association.", "title": "Status after the war" }, { "paragraph_id": 47, "text": "An exception is among some left-wingers, for example many anarchists. Among these, the Brigades, or at least their leadership, are criticized for their role in suppressing the Spanish Revolution. An example of a modern work that promotes this view is Ken Loach's film Land and Freedom. A well-known contemporary account of the Spanish Civil War which also takes this view is George Orwell's book Homage to Catalonia.", "title": "Status after the war" }, { "paragraph_id": 48, "text": "Germany was undivided until after the Second World War. At that time, the new communist state, the German Democratic Republic, began to create a national identity which was separate from and antithetical to the former Nazi Germany. 
The Spanish Civil War, and especially the role of the International Brigades, became a substantial part of East Germany's memorial rituals because of the large number of German communists who had served in the brigades. These rituals showcased a commitment by many Germans to antifascism at a time when Germany and Nazism were often conflated.", "title": "Status after the war" }, { "paragraph_id": 49, "text": "Survivors of the Mackenzie-Papineau Battalion were often investigated by the Royal Canadian Mounted Police and denied employment when they returned to Canada. Some were prevented from serving in the military during the Second World War due to \"political unreliability\".", "title": "Status after the war" }, { "paragraph_id": 50, "text": "In 1995 a monument to veterans of the war was built near Ontario's provincial parliament. On 12 February 2000, a bronze statue \"The Spirit of the Republic\" by sculptor Jack Harman, based on an original poster from the Spanish Republic, was placed on the grounds of the British Columbia Legislature. In 2001, the few remaining Canadian veterans of the Spanish Civil War dedicated a monument to Canadian members of the International Brigades in Ottawa's Green Island Park.", "title": "Status after the war" }, { "paragraph_id": 51, "text": "In line with the 1920 legislation, Polish citizens who volunteered for the IB were automatically stripped of citizenship as individuals who had served in foreign armed forces without formal approval. Following the Republican defeat, the combatants recruited in France and Belgium returned there. Among the others, some served in pro-Communist partisan units in German-occupied Poland, while some made it to the USSR and served in the pro-Communist Polish army raised there.
Following some early exaltation in 1945–1949, they were later treated somewhat cautiously. Some assumed high positions in the administration and especially in the security apparatus, but there were also cases of deposition, arrest and imprisonment on trumped-up charges of political conspiracy; those imprisoned were released in the mid-1950s.", "title": "Status after the war" }, { "paragraph_id": 53, "text": "Though from the outset Polish engagement in the IB was hailed as the \"working class taking up arms against Fascism\", the most intense idolization took place between the mid-1950s and the mid-1960s, with a spate of publications, schools and streets named after the \"Dąbrowszczacy\". However, an antisemitic turn in the late 1960s led to a renewed de-emphasis of the IB volunteers, many of whom left Poland. Until the end of Communist rule the IB episode was duly acknowledged, but the related propaganda was a far cry from the veneration reserved for wartime Communist partisans or the USSR-raised Polish army. Despite some efforts on the part of IB combatants, no monument has been erected.", "title": "Status after the war" }, { "paragraph_id": 54, "text": "After 1989 it was unclear whether the Dąbrowszczacy were still entitled to veteran privileges; the issue generated political debates until they became pointless, as almost all IB combatants had passed away. Another question concerned commemorations existing in public space. The state-run Institute of National Remembrance (IPN) declared that Polish IB combatants had been in the service of the Stalinist regime, making related commemorations subject to de-communisation legislation. However, the thoroughness of these purges of public space differs depending on the local political configuration, and heated public debate occasionally ensues.
To this day, the role of Polish IB combatants remains highly divisive; to some they are traitors, to others heroes.", "title": "Status after the war" }, { "paragraph_id": 55, "text": "In Switzerland, public sympathy was high for the Republican cause, but the federal government banned all fundraising and recruiting activities a month after the start of the war as part of the country's long-standing policy of neutrality. Around 800 Swiss volunteers joined the International Brigades, among them a small number of women. Sixty percent of Swiss volunteers identified as communists, while the others included socialists, anarchists and antifascists.", "title": "Status after the war" }, { "paragraph_id": 56, "text": "Some 170 Swiss volunteers were killed in the war. The survivors were tried by military courts upon their return to Switzerland for violating the criminal prohibition on foreign military service. The courts pronounced 420 sentences which ranged from around 2 weeks to 4 years in prison, and often also stripped the convicts of their political rights for periods of up to 5 years. In Swiss society, traditionally highly appreciative of civic virtue, this translated into long-lasting stigmatization even after the sentences had been served. In the judgment of Swiss historian Mauro Cerutti, volunteers were punished more harshly in Switzerland than in any other democratic country.", "title": "Status after the war" }, { "paragraph_id": 57, "text": "Motions to pardon the Swiss brigaders on the grounds that they fought for a just cause have been repeatedly introduced in the Swiss federal parliament. A first such proposal was defeated in 1939 on neutrality grounds. In 2002, Parliament again rejected a pardon of the Swiss war volunteers, with a majority arguing that they broke a law that remains in effect to this day.
In March 2009, Parliament adopted a third pardon bill, retroactively rehabilitating the Swiss brigadistas, only a handful of whom were still alive. In 2000, a monument honoring Swiss IB combatants was unveiled in Geneva; there are also numerous plaques mounted elsewhere, e.g., at the Volkshaus in Zürich.", "title": "Status after the war" }, { "paragraph_id": 58, "text": "On disbandment, 305 British volunteers left Spain to return home. They arrived at Victoria Station in central London on 7 December and were met warmly as returning heroes by a crowd of supporters including Clement Attlee, Stafford Cripps, Willie Gallacher, Ellen Wilkinson and Will Lawther.", "title": "Status after the war" }, { "paragraph_id": 59, "text": "The last surviving British member of the International Brigades, Geoffrey Servante, died in April 2019 aged 99.", "title": "Status after the war" }, { "paragraph_id": 60, "text": "The International Brigade Memorial Trust is a registered charity that handles activities around the memory of volunteers from Britain and Ireland. The group maintains a map of memorials to volunteers in the Spanish Civil War and organises yearly events to commemorate the war.", "title": "Status after the war" }, { "paragraph_id": 61, "text": "In the United States, the returned volunteers were labeled \"premature anti-fascists\" by the FBI, denied promotion during service in the U.S. military during World War II, and pursued by Congressional committees during the Red Scare of 1947–1957. However, threats of loss of citizenship were not carried out.", "title": "Status after the war" }, { "paragraph_id": 62, "text": "Josep Almudéver, believed to be the last surviving veteran of the International Brigades, died on 23 May 2021 at the age of 101. Although born into a Spanish family and living in Spain at the outbreak of the conflict, he also held French citizenship and enlisted in the International Brigades to avoid age restrictions in the Spanish Republican army.
He served in the CXXIX International Brigade and later fought in the Spanish Maquis, and after the war lived in exile in France.", "title": "Recognition" }, { "paragraph_id": 63, "text": "On 26 January 1996, the Spanish government gave Spanish citizenship to the 600 or so remaining Brigadistas, fulfilling a promise made by Prime Minister Juan Negrín in 1938.", "title": "Recognition" }, { "paragraph_id": 64, "text": "In 1996, Jacques Chirac, then French President, granted the former French members of the International Brigades the legal status of former service personnel (\"anciens combattants\") following the request of two French communist Members of Parliament, Lefort and Asensi, both children of volunteers. Before 1996, the same request had been turned down several times, including by François Mitterrand, the former Socialist President.", "title": "Recognition" }, { "paragraph_id": 65, "text": "The International Brigades were inheritors of a socialist aesthetic. The flags featured the colors of the Spanish Republic: red, yellow and purple, often along with socialist symbols (red flags, hammer and sickle, fist). The emblem of the brigades themselves was the three-pointed red star.", "title": "Symbolism and heraldry" } ]
The International Brigades were military units set up by the Communist International to assist the Popular Front government of the Second Spanish Republic during the Spanish Civil War. The organization existed for two years, from 1936 until 1938. It is estimated that during the entire war, between 40,000 and 59,000 members served in the International Brigades, including some 10,000 who died in combat. Beyond the Spanish Civil War, "International Brigades" is also sometimes used interchangeably with the term foreign legion in reference to military units comprising foreigners who volunteer to fight in the military of another state, often in times of war. The headquarters of the brigade was located at the Gran Hotel, Albacete, Castilla-La Mancha. They participated in the battles of Madrid, Jarama, Guadalajara, Brunete, Belchite, Teruel, Aragon and the Ebro. Most of these ended in defeat. For the last year of its existence, the International Brigades were integrated into the Spanish Republican Army as part of the Spanish Foreign Legion. The organisation was dissolved on 23 September 1938 by Spanish Prime Minister Juan Negrín in a vain attempt to get more support from the liberal democracies on the Non-Intervention Committee. The International Brigades were strongly supported by the Comintern and represented the Soviet Union's commitment to assisting the Spanish Republic, just as Portugal, Fascist Italy, and Nazi Germany were assisting the opposing Nationalist insurgency. The largest number of volunteers came from France and communist exiles from Italy and Germany. Many Jews were part of the brigades, being particularly numerous within the volunteers coming from the United States, Poland, France, England and Argentina. Republican volunteers who were opposed to Stalinism did not join the Brigades but instead enlisted in the separate Popular Front, the POUM, or anarcho-syndicalist groups such as the Durruti Column, the IWA, and the CNT.
2001-12-14T11:13:25Z
2023-12-30T23:27:11Z
[ "Template:Anti-fascism sidebar", "Template:Cite web", "Template:Webarchive", "Template:Mixed brigades of Spain", "Template:Moresources", "Template:Dead link", "Template:XII International Brigade", "Template:Cite book", "Template:Refend", "Template:Use dmy dates", "Template:Main", "Template:Flag", "Template:See also", "Template:Infobox military unit", "Template:Cite swiss law", "Template:Authority control", "Template:Lang-es", "Template:Cite journal", "Template:Cite news", "Template:XV International Brigade", "Template:Citation needed", "Template:In lang", "Template:Commons category", "Template:Spanish Civil War", "Template:Flagdeco", "Template:Reflist", "Template:Harvnb", "Template:Citation", "Template:ISBN", "Template:Refbegin", "Template:Short description", "Template:For", "Template:Vanchor", "Template:CSS image crop" ]
https://en.wikipedia.org/wiki/International_Brigades
15,373
Iron Duke
Iron Duke may refer to:
[ { "paragraph_id": 0, "text": "Iron Duke may refer to:", "title": "" } ]
Iron Duke may refer to:
2023-01-29T21:44:02Z
[ "Template:TOCright", "Template:HMS", "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/Iron_Duke
15,374
Food irradiation
Food irradiation (sometimes radurization or radurisation) is the process of exposing food and food packaging to ionizing radiation, such as from gamma rays, x-rays, or electron beams. Food irradiation improves food safety and extends product shelf life (preservation) by effectively destroying organisms responsible for spoilage and foodborne illness; it also inhibits sprouting or ripening and is a means of controlling insects and invasive pests. In the US, consumer perception of foods treated with irradiation is more negative than of foods processed by other means. The U.S. Food and Drug Administration (FDA), the World Health Organization (WHO), the Centers for Disease Control and Prevention (CDC), and U.S. Department of Agriculture (USDA) have performed studies that confirm irradiation to be safe. In order for a food to be irradiated in the US, the FDA still requires that the specific food be thoroughly tested for irradiation safety. Food irradiation is permitted in over 60 countries, and about 500,000 metric tons of food are processed annually worldwide. The regulations for how food is to be irradiated, as well as the foods allowed to be irradiated, vary greatly from country to country. In Austria, Germany, and many other countries of the European Union only dried herbs, spices, and seasonings can be processed with irradiation, and only at a specific dose, while in Brazil all foods are allowed at any dose. Irradiation is used to reduce or eliminate pests and the risk of food-borne illnesses, as well as to prevent or slow spoilage and plant maturation or sprouting. Depending on the dose, some or all of the organisms, microorganisms, bacteria, and viruses present are destroyed, slowed, or rendered incapable of reproduction. When targeting bacteria, most foods are irradiated to significantly reduce the number of active microbes, not to sterilize all microbes in the product. Irradiation cannot return spoiled or over-ripe food to a fresh state.
If such food were processed by irradiation, further spoilage would cease and ripening would slow, yet the irradiation would not destroy the toxins or repair the texture, color, or taste of the food. Irradiation slows the speed at which enzymes change the food. By reducing or removing spoilage organisms and slowing ripening and sprouting (e.g. of potato, onion, and garlic), irradiation is used to reduce the amount of food that goes bad between harvest and final use. Shelf-stable products are created by irradiating foods in sealed packages; irradiation reduces the chance of spoilage, while the packaging prevents re-contamination of the final product. Foods that can tolerate the higher doses of radiation required to do so can be sterilized. This is useful for people at high risk of infection in hospitals, as well as in situations where proper food storage is not feasible, such as rations for astronauts. Pests such as insects have been transported to new habitats through the trade in fresh produce and have significantly affected agricultural production and the environment once they established themselves. To reduce this threat and enable trade across quarantine boundaries, food is irradiated using a technique called phytosanitary irradiation. Phytosanitary irradiation sterilizes the pests, preventing breeding, by treating the produce with low doses of irradiation (less than 1000 Gy). The higher doses required to destroy pests outright are not used, because they would either affect the look or taste of fresh produce or could not be tolerated by it. The target material is exposed to a radiation source that is separated from the target material. The radiation source supplies energetic particles or waves. As these waves/particles enter the target material they collide with other particles. The higher the likelihood of these collisions over a given distance, the lower the penetration depth of the irradiation process, as the energy is depleted more quickly.
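The inverse relationship between collision likelihood and penetration depth described above behaves, to a first approximation, like simple exponential attenuation. A minimal sketch of that idea — the function name and the attenuation-coefficient values are purely illustrative assumptions, not data for any real food or source:

```python
import math

def transmitted_fraction(depth_cm: float, mu_per_cm: float) -> float:
    """Fraction of incident radiation remaining after traversing depth_cm
    of material with linear attenuation coefficient mu_per_cm. A higher
    collision likelihood per unit distance (larger mu) depletes the energy
    sooner, i.e. gives a shallower penetration depth."""
    return math.exp(-mu_per_cm * depth_cm)

# Illustrative comparison: the medium with the larger mu attenuates faster.
for depth in (1.0, 5.0, 10.0):
    print(depth,
          round(transmitted_fraction(depth, 0.05), 3),   # low-mu medium
          round(transmitted_fraction(depth, 0.20), 3))   # high-mu medium
```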
Around the sites of these collisions chemical bonds are broken, creating short-lived radicals (e.g. the hydroxyl radical, the hydrogen atom and solvated electrons). These radicals cause further chemical changes by bonding with and/or stripping particles from nearby molecules. When collisions occur in cells, cell division is often suppressed, halting or slowing the processes that cause the food to mature. When the process damages DNA or RNA, effective reproduction becomes unlikely, halting the population growth of viruses and organisms. The dose of radiation is distributed unevenly between the food's surface and its interior, since it is absorbed as it moves through the food; the distribution depends on the energy of the radiation, the density of the food, and the type of radiation used. Irradiation leaves a product with qualities (sensory and chemical) that are more similar to unprocessed food than any preservation method that can achieve a similar degree of preservation. Irradiated food does not become radioactive; only power levels that are incapable of causing significant induced radioactivity are used for food irradiation. In the United States this limit is deemed to be 4 megaelectronvolts (MeV) for electron beams and x-ray sources; cobalt-60 or caesium-137 sources are never energetic enough to be of concern. Particles below this energy can never be strong enough to modify the nucleus of the targeted atom in the food, regardless of how many particles hit the target material, and so radioactivity cannot be induced. The radiation absorbed dose is the amount of energy absorbed per unit mass of the target material, measured in grays (Gy, or J/kg). Dose is used because, when the same substance is given the same dose, similar changes are observed in the target material. Dosimeters are used to measure dose, and are small components that, when exposed to ionizing radiation, change measurable physical attributes to a degree that can be correlated to the dose received.
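As a quick worked example of the dose definition above (Gy = J/kg) — the helper name here is just an illustration:

```python
def absorbed_dose_gray(energy_joules: float, mass_kg: float) -> float:
    """Absorbed dose in gray: energy deposited per unit mass (Gy = J/kg)."""
    return energy_joules / mass_kg

# 5000 J deposited uniformly in 5 kg of product is 1000 Gy, i.e. 1 kGy.
print(absorbed_dose_gray(5000.0, 5.0))  # 1000.0
```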
Measuring dose (dosimetry) involves exposing one or more dosimeters along with the target material. For purposes of legislation, doses are divided into low (up to 1 kGy), medium (1 kGy to 10 kGy), and high-dose applications (above 10 kGy). High-dose applications are above those currently permitted in the US for commercial food items by the FDA and other regulators around the world, though these doses are approved for non-commercial applications, such as sterilizing frozen meat for NASA astronauts (doses of 44 kGy) and food for hospital patients. The ratio of the maximum dose permitted at the outer edge (Dmax) to the minimum limit to achieve processing conditions (Dmin) determines the uniformity of dose distribution: the closer this ratio is to 1, the more uniform the irradiation process. As ionising radiation passes through food, it creates a trail of chemical transformations due to radiolysis effects. Irradiation does not make foods radioactive, compromise nutrient content, or noticeably change the taste, texture, or appearance of food. Assessed rigorously over several decades, irradiation at the doses used commercially to treat food has no negative impact on the sensory qualities and nutrient content of foods. Watercress (Nasturtium officinale) is a rapidly growing aquatic or semi-aquatic perennial plant. Because chemical agents do not provide efficient microbial reductions, watercress has been tested with gamma irradiation treatment in order to improve both the safety and the shelf life of the product. Irradiation is traditionally used on horticultural products to prevent sprouting and post-packaging contamination, and to delay post-harvest ripening, maturation and senescence.
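The legislative dose bands and the Dmax/Dmin uniformity ratio described above can be captured in a few lines; the function names are illustrative, not taken from any regulation:

```python
def dose_class(dose_kGy: float) -> str:
    """Classify a dose into the legislative bands: low (up to 1 kGy),
    medium (1 kGy to 10 kGy), high (above 10 kGy)."""
    if dose_kGy <= 1.0:
        return "low"
    if dose_kGy <= 10.0:
        return "medium"
    return "high"

def dose_uniformity_ratio(d_max_kGy: float, d_min_kGy: float) -> float:
    """Dmax/Dmin: values closer to 1.0 indicate a more uniform process."""
    return d_max_kGy / d_min_kGy

print(dose_class(0.4))   # low  (e.g. phytosanitary irradiation, < 1 kGy)
print(dose_class(44.0))  # high (e.g. sterilized rations for astronauts)
print(dose_uniformity_ratio(3.0, 2.0))  # 1.5
```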
Some who advocate against food irradiation argue that the long-term health effects and safety of irradiated food cannot be scientifically proven; however, there have been hundreds of animal feeding studies of irradiated food performed since 1950. Endpoints include subchronic and chronic changes in metabolism, histopathology, function of most organs, reproductive effects, growth, teratogenicity, and mutagenicity. Up to the point where the food is processed by irradiation, the food is processed in the same way as all other food. For some forms of treatment, packaging is used to ensure the food stuffs never come in contact with radioactive substances and to prevent re-contamination of the final product. Food processors and manufacturers today struggle with using affordable, efficient packaging materials for irradiation-based processing. The implementation of irradiation on prepackaged foods has been found to impact foods by inducing specific chemical alterations to the food packaging material that migrate into the food. Cross-linking in various plastics can lead to physical and chemical modifications that can increase the overall molecular weight. On the other hand, chain scission is fragmentation of polymer chains that leads to a molecular weight reduction. To treat the food, it is exposed to a radioactive source for a set period of time to achieve a desired dose. Radiation may be emitted by a radioactive substance, or by X-ray and electron beam accelerators. Special precautions are taken to ensure the food stuffs never come in contact with the radioactive substances and that the personnel and the environment are protected from exposure to radiation. Irradiation treatments are typically classified by dose (high, medium, and low), but are sometimes classified by the effects of the treatment (radappertization, radicidation and radurization).
Food irradiation is sometimes referred to as "cold pasteurization" or "electronic pasteurization" because ionizing the food does not heat the food to high temperatures during the process, and the effect is similar to heat pasteurization. The term "cold pasteurization" is controversial because the term may be used to disguise the fact that the food has been irradiated, and pasteurization and irradiation are fundamentally different processes. Gamma irradiation is produced from the radioisotopes cobalt-60 and caesium-137, which are produced by neutron irradiation of cobalt-59 (the only stable isotope of cobalt) and as a nuclear fission product, respectively. Cobalt-60 is the most common source of gamma rays for food irradiation in commercial-scale facilities as it is water-insoluble and hence poses little risk of environmental contamination by leakage into the water systems. For transportation, cobalt-60 is carried in special trucks that prevent release of radiation and meet the standards of the Regulations for the Safe Transport of Radioactive Material of the International Atomic Energy Agency. The special trucks must meet high safety standards and pass extensive tests to be approved to ship radiation sources. Conversely, caesium-137 is water-soluble and poses a risk of environmental contamination. Insufficient quantities are available for large-scale commercial use, as the vast majority of caesium-137 produced in nuclear reactors is not extracted from spent nuclear fuel. An incident in which water-soluble caesium-137 leaked into the source storage pool, requiring NRC intervention, has led to the near elimination of this radioisotope. Gamma irradiation is widely used due to its high penetration depth and dose uniformity, allowing for large-scale applications with high throughput. Additionally, gamma irradiation is significantly less expensive than using an X-ray source.
In most designs, the radioisotope, contained in stainless steel pencils, is stored in a water-filled storage pool which absorbs the radiation energy when not in use. For treatment, the source is lifted out of the storage tank, and product contained in totes is passed around the pencils to achieve required processing. Treatment costs vary as a function of dose and facility usage. A pallet or tote is typically exposed for several minutes to hours depending on dose. Low-dose applications such as disinfestation of fruit range between US$0.01/lb and US$0.08/lb, while higher-dose applications can cost as much as US$0.20/lb. Electron beam treatment uses high-energy electrons produced in an accelerator, which accelerates them to about 99% of the speed of light. This system uses electrical energy and can be powered on and off. The high power correlates with a higher throughput and lower unit cost, but electron beams have low dose uniformity and a penetration depth of centimeters. Therefore, electron beam treatment works for products that have low thickness. X-rays are produced by bombardment of a dense target material with high-energy accelerated electrons (this process is known as bremsstrahlung conversion), giving rise to a continuous energy spectrum. Heavy metals, such as tantalum and tungsten, are used because of their high atomic numbers and high melting temperatures. Tantalum is usually preferred over tungsten for industrial, large-area, high-power targets because it is more workable than tungsten and has a higher threshold energy for induced reactions. Like electron beams, x-rays do not require the use of radioactive materials and can be turned off when not in use. X-rays have high penetration depths and high dose uniformity, but they are a very expensive source of irradiation, as only 8% of the incident energy is converted into X-rays. UV-C does not penetrate as deeply as other methods.
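The roughly 8% conversion efficiency quoted above implies a large electron-beam power budget for a given X-ray output. A back-of-the-envelope sketch — the function name is an illustrative assumption:

```python
def required_beam_power_kw(xray_power_kw: float, efficiency: float = 0.08) -> float:
    """Electron-beam power needed to produce a given X-ray output power,
    assuming the ~8% bremsstrahlung conversion efficiency cited above."""
    return xray_power_kw / efficiency

# 8 kW of usable X-rays would require roughly 100 kW of beam power;
# the remaining ~92 kW ends up as heat in the converter target.
print(required_beam_power_kw(8.0))  # 100.0
```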
As such, its direct antimicrobial effect is limited to the surface only. Its DNA damage effect produces cyclobutane-type pyrimidine dimers. Besides the direct effects, UV-C also induces resistance even against pathogens not yet inoculated. Some of this induced resistance is understood, being the result of temporary inactivation of self-degradation enzymes like polygalacturonase and increased expression of enzymes associated with cell wall repair. Irradiation is a capital-intensive technology requiring a substantial initial investment, ranging from $1 million to $5 million. In the case of large research or contract irradiation facilities, major capital costs include a radiation source, hardware (irradiator, totes and conveyors, control systems, and other auxiliary equipment), land (1 to 1.5 acres), radiation shield, and warehouse. Operating costs include salaries (for fixed and variable labor), utilities, maintenance, taxes/insurance, cobalt-60 replenishment, general utilities, and miscellaneous operating costs. Perishable food items, like fruits, vegetables and meats, would still need to be handled in the cold chain, so all other supply chain costs remain the same. Food manufacturers have not embraced food irradiation because the market does not support the increased price of irradiated foods, and because of potential consumer backlash against irradiated foods. The cost of food irradiation is influenced by dose requirements, the food's tolerance of radiation, handling conditions, i.e., packaging and stacking requirements, construction costs, financing arrangements, and other variables particular to the situation. Irradiation has been approved by many countries. For example, in the U.S. and Canada, food irradiation has existed for decades.
Food irradiation is used commercially and volumes are in general increasing at a slow rate, even in the European Union, where all member countries allow the irradiation of dried herbs, spices and vegetable seasonings but only a few allow other foods to be sold as irradiated. Although there are some consumers who choose not to purchase irradiated food, a sufficient market has existed for retailers to have continuously stocked irradiated products for years. When labeled irradiated food is offered for retail sale, consumers buy it and re-purchase it, indicating a market for irradiated foods, although there is a continuing need for consumer education. Food scientists have concluded that any fresh or frozen food undergoing irradiation at specified doses is safe to consume, with some 60 countries using irradiation to maintain quality in their food supply. The Codex Alimentarius represents the global standard for irradiation of food, in particular under the WTO-agreement. Regardless of treatment source, all processing facilities must adhere to safety standards set by the International Atomic Energy Agency (IAEA), the Codex Code of Practice for the Radiation Processing of Food, the Nuclear Regulatory Commission (NRC), and the International Organization for Standardization (ISO). More specifically, ISO 14470 and ISO 9001 provide in-depth information regarding safety in irradiation facilities. All commercial irradiation facilities contain safety systems which are designed to prevent exposure of personnel to radiation. The radiation source is constantly shielded by water, concrete, or metal. Irradiation facilities are designed with overlapping layers of protection, interlocks, and safeguards to prevent accidental radiation exposure. Meltdowns are unlikely to occur due to the low heat production of the sources used.
The provisions of the Codex Alimentarius are that any "first generation" product must be labeled "irradiated", as must any product derived directly from an irradiated raw material; for ingredients, the provision is that even the last molecule of an irradiated ingredient must be listed with the ingredients, even in cases where the unirradiated ingredient does not appear on the label. The RADURA logo is optional; several countries use a graphical version that differs from the Codex version. The suggested rules for labeling are published at CODEX-STAN – 1 (2005), and include the usage of the Radura symbol for all products that contain irradiated foods. The Radura symbol is not a designator of quality. The amount of pathogens remaining is based upon dose and the original content, and the dose applied can vary on a product-by-product basis. The European Union follows the Codex's provision to label irradiated ingredients down to the last molecule of irradiated food. The European Union does not provide for the use of the Radura logo and relies exclusively on labeling by the appropriate phrases in the respective languages of the Member States. The European Union enforces its irradiation labeling laws by requiring its member countries to perform tests on a cross section of food items in the marketplace and to report to the European Commission. The results are published annually on EUR-Lex. The US defines irradiated foods as foods in which the irradiation causes a material change in the food, or a material change in the consequences that may result from the use of the food. Therefore, food that is processed as an ingredient by a restaurant or food processor is exempt from the labeling requirement in the US. All irradiated foods must include a prominent Radura symbol in addition to the statement "treated with irradiation" or "treated by irradiation".
Bulk foods must be individually labeled with the symbol and statement or, alternatively, the Radura and statement should be located next to the sale container. Under section 409 of the Federal Food, Drug, and Cosmetic Act, irradiation of prepackaged foods requires premarket approval not only for the irradiation source for a specific food but also for the food packaging material. Approved packaging materials include various plastic films, but the approvals do not cover a variety of polymers and adhesive-based materials that have been found to meet specific standards. The lack of packaging material approval limits manufacturers' production and expansion of irradiated prepackaged foods. Materials approved by the FDA for irradiation are listed in 21 CFR 179.45. In 2003, the Codex Alimentarius removed any upper dose limit for food irradiation as well as clearances for specific foods, declaring that all are safe to irradiate. Countries such as Pakistan and Brazil have adopted the Codex without any reservation or restriction. Standards that describe calibration and operation for radiation dosimetry, as well as procedures to relate the measured dose to the effects achieved and to report and document such results, are maintained by the American Society for Testing and Materials (ASTM International) and are also available as ISO/ASTM standards. All of the rules involved in processing food are applied to all foods before they are irradiated. The U.S. Food and Drug Administration (FDA) is the agency responsible for regulation of radiation sources in the United States. Irradiation, as defined by the FDA, is a "food additive" as opposed to a food process and therefore falls under the food additive regulations. Each food approved for irradiation has specific guidelines in terms of minimum and maximum dosage, as determined safe by the FDA. Packaging materials containing the food processed by irradiation must also undergo approval.
The United States Department of Agriculture (USDA) amends these rules for use with meat, poultry, and fresh fruit. The USDA has approved the use of low-level irradiation as an alternative treatment to pesticides for fruits and vegetables that are considered hosts to a number of insect pests, including fruit flies and seed weevils. Under bilateral agreements that allow less-developed countries to earn income through food exports, they are permitted to irradiate fruits and vegetables at low doses to kill insects, so that the food can avoid quarantine. The U.S. Food and Drug Administration and the U.S. Department of Agriculture have approved irradiation of the following foods and purposes: European law stipulates that all member countries must allow the sale of irradiated dried aromatic herbs, spices and vegetable seasonings. However, these Directives allow Member States to maintain previous clearances for food categories the EC's Scientific Committee on Food (SCF) had previously approved (the approval body is now the European Food Safety Authority). Presently, Belgium, Czech Republic, France, Italy, Netherlands, and Poland allow the sale of many different types of irradiated foods. Before individual items in an approved class can be added to the approved list, studies into the toxicology of each such food and for each of the proposed dose ranges are requested. It also states that irradiation shall not be used "as a substitute for hygiene or health practices or good manufacturing or agricultural practice". These Directives only control food irradiation for food retail, and their conditions and controls are not applicable to the irradiation of food for patients requiring sterile diets. In 2021 the most common food items irradiated were frog legs (65.1%), poultry (20.6%), and dried aromatic herbs, spices and vegetable seasonings.
Due to the European Single Market, any food, even if irradiated, must be allowed to be marketed in any other member state even if a general ban on food irradiation prevails, under the condition that the food has been irradiated legally in the state of origin. Furthermore, imports into the EC are possible from third countries if the irradiation facility has been inspected and approved by the EC and the treatment is legal within the EC or some Member State. In Australia, cat food irradiation was banned following cat deaths after the consumption of irradiated cat food and the producer's voluntary recall. Interlocks and safeguards are mandated to minimize the risk of accidental radiation exposure. There have been radiation-related accidents, deaths, and injuries at such facilities, many of them caused by operators overriding the safety-related interlocks. In a radiation processing facility, radiation-specific concerns are supervised by special authorities, while ordinary occupational safety regulations are handled much as in other businesses. The safety of irradiation facilities is regulated by the United Nations International Atomic Energy Agency and monitored by the different national Nuclear Regulatory Commissions. The regulators enforce a safety culture that mandates that all incidents that occur are documented and thoroughly analyzed to determine the cause and improvement potential. Such incidents are studied by personnel at multiple facilities, and improvements are mandated to retrofit existing facilities and future designs. In the US the Nuclear Regulatory Commission (NRC) regulates the safety of the processing facility, and the United States Department of Transportation (DOT) regulates the safe transport of the radioactive sources. The word "radurization" is derived from radura, combining the initial letters of the word "radiation" with the stem of "durus", the Latin word for hard, lasting.
[ { "paragraph_id": 0, "text": "Food irradiation (sometimes radurization or radurisation) is the process of exposing food and food packaging to ionizing radiation, such as from gamma rays, x-rays, or electron beams. Food irradiation improves food safety and extends product shelf life (preservation) by effectively destroying organisms responsible for spoilage and foodborne illness; it also inhibits sprouting or ripening and is a means of controlling insects and invasive pests.", "title": "" }, { "paragraph_id": 1, "text": "In the US, consumer perception of foods treated with irradiation is more negative than of foods processed by other means. The U.S. Food and Drug Administration (FDA), the World Health Organization (WHO), the Centers for Disease Control and Prevention (CDC), and U.S. Department of Agriculture (USDA) have performed studies that confirm irradiation to be safe. In order for a food to be irradiated in the US, the FDA still requires that the specific food be thoroughly tested for irradiation safety.", "title": "" }, { "paragraph_id": 2, "text": "Food irradiation is permitted in over 60 countries, and about 500,000 metric tons of food are processed annually worldwide. The regulations for how food is to be irradiated, as well as the foods allowed to be irradiated, vary greatly from country to country. In Austria, Germany, and many other countries of the European Union only dried herbs, spices, and seasonings can be processed with irradiation, and only at a specific dose, while in Brazil all foods are allowed at any dose.", "title": "" }, { "paragraph_id": 3, "text": "Irradiation is used to reduce or eliminate pests and the risk of food-borne illnesses, as well as to prevent or slow spoilage and plant maturation or sprouting. Depending on the dose, some or all of the organisms, microorganisms, bacteria, and viruses present are destroyed, slowed, or rendered incapable of reproduction.
When targeting bacteria, most foods are irradiated to significantly reduce the number of active microbes, not to sterilize all microbes in the product. Irradiation cannot return spoiled or over-ripe food to a fresh state. If such food were processed by irradiation, further spoilage would cease and ripening would slow, yet the irradiation would not destroy the toxins or repair the texture, color, or taste of the food.", "title": "Uses" }, { "paragraph_id": 4, "text": "Irradiation slows the speed at which enzymes change the food. By reducing or removing spoilage organisms and slowing ripening and sprouting (e.g. of potato, onion, and garlic), irradiation is used to reduce the amount of food that goes bad between harvest and final use. Shelf-stable products are created by irradiating foods in sealed packages; irradiation reduces the chance of spoilage, while the packaging prevents re-contamination of the final product. Foods that can tolerate the higher doses of radiation required to do so can be sterilized. This is useful for people at high risk of infection in hospitals, as well as in situations where proper food storage is not feasible, such as rations for astronauts.", "title": "Uses" }, { "paragraph_id": 5, "text": "Pests such as insects have been transported to new habitats through the trade in fresh produce and have significantly affected agricultural production and the environment once they established themselves. To reduce this threat and enable trade across quarantine boundaries, food is irradiated using a technique called phytosanitary irradiation. Phytosanitary irradiation sterilizes the pests, preventing breeding, by treating the produce with low doses of irradiation (less than 1000 Gy). The higher doses required to destroy pests outright are not used, because they would either affect the look or taste of fresh produce or could not be tolerated by it.", "title": "Uses" }, { "paragraph_id": 6, "text": "The target material is exposed to a radiation source that is separated from the target material.
The radiation source supplies energetic particles or waves. As these waves/particles enter the target material, they collide with other particles. The higher the likelihood of these collisions over a given distance, the lower the penetration depth of the irradiation process, as the energy is depleted more quickly.", "title": "Process" }, { "paragraph_id": 7, "text": "Around the sites of these collisions, chemical bonds are broken, creating short-lived radicals (e.g. the hydroxyl radical, the hydrogen atom, and solvated electrons). These radicals cause further chemical changes by bonding with and/or stripping particles from nearby molecules. When collisions occur in cells, cell division is often suppressed, halting or slowing the processes that cause the food to mature.", "title": "Process" }, { "paragraph_id": 8, "text": "When the process damages DNA or RNA, effective reproduction becomes unlikely, halting the population growth of viruses and organisms. The dose of radiation varies between the food's surface and its interior, since the radiation is absorbed as it moves through the food; the distribution depends on the energy of the radiation, the density of the food, and the type of radiation used.", "title": "Process" }, { "paragraph_id": 9, "text": "Irradiation leaves a product with qualities (sensory and chemical) that are more similar to unprocessed food than any preservation method that can achieve a similar degree of preservation.", "title": "Process" }, { "paragraph_id": 10, "text": "Irradiated food does not become radioactive; only power levels that are incapable of causing significant induced radioactivity are used for food irradiation. In the United States this limit is deemed to be 4 megaelectronvolts (MeV) for electron beams and x-ray sources – cobalt-60 or caesium-137 sources are never energetic enough to be of concern. 
Particles below this energy can never be strong enough to modify the nucleus of the targeted atom in the food, regardless of how many particles hit the target material, and so radioactivity cannot be induced.", "title": "Process" }, { "paragraph_id": 11, "text": "The radiation absorbed dose is the amount of energy absorbed per unit mass of the target material, measured in grays (Gy, or J/kg). Dose is used because, when the same substance is given the same dose, similar changes are observed in the target material. Dosimeters are used to measure dose, and are small components that, when exposed to ionizing radiation, change measurable physical attributes to a degree that can be correlated to the dose received. Measuring dose (dosimetry) involves exposing one or more dosimeters along with the target material.", "title": "Process" }, { "paragraph_id": 12, "text": "For purposes of legislation, doses are divided into low (up to 1 kGy), medium (1 kGy to 10 kGy), and high-dose applications (above 10 kGy). High-dose applications are above those currently permitted in the US for commercial food items by the FDA and other regulators around the world, though these doses are approved for non-commercial applications, such as sterilizing frozen meat for NASA astronauts (doses of 44 kGy) and food for hospital patients.", "title": "Process" }, { "paragraph_id": 13, "text": "The ratio of the maximum dose permitted at the outer edge (Dmax) to the minimum dose needed to achieve processing conditions (Dmin) determines the uniformity of the dose distribution, and hence how uniform the irradiation process is.", "title": "Process" }, { "paragraph_id": 14, "text": "As ionising radiation passes through food, it creates a trail of chemical transformations due to radiolysis effects. 
Irradiation does not make foods radioactive, nor does it noticeably change food chemistry, compromise nutrient content, or change the taste, texture, or appearance of food.", "title": "Chemical changes" }, { "paragraph_id": 15, "text": "Assessed rigorously over several decades, irradiation in commercial amounts to treat food has no negative impact on the sensory qualities and nutrient content of foods.", "title": "Chemical changes" }, { "paragraph_id": 16, "text": "Watercress (Nasturtium officinale) is a rapidly growing aquatic or semi-aquatic perennial plant. Because chemical agents do not provide efficient microbial reductions, watercress has been tested with gamma irradiation treatment in order to improve both safety and the shelf life of the product. Gamma irradiation is traditionally used on horticultural products to prevent sprouting and post-packaging contamination and to delay post-harvest ripening, maturation, and senescence.", "title": "Chemical changes" }, { "paragraph_id": 17, "text": "Some who advocate against food irradiation argue that the long-term health effects and safety of irradiated food cannot be scientifically proven; however, hundreds of animal feeding studies of irradiated food have been performed since 1950. Endpoints include subchronic and chronic changes in metabolism, histopathology, function of most organs, reproductive effects, growth, teratogenicity, and mutagenicity.", "title": "Chemical changes" }, { "paragraph_id": 18, "text": "Up to the point where the food is processed by irradiation, the food is processed in the same way as all other food.", "title": "Industrial process" }, { "paragraph_id": 19, "text": "For some forms of treatment, packaging is used to ensure the foodstuffs never come in contact with radioactive substances and to prevent re-contamination of the final product. Food processors and manufacturers today struggle with finding affordable, efficient packaging materials for irradiation-based processing. 
The implementation of irradiation on prepackaged foods has been found to affect foods by inducing specific chemical alterations in the food packaging material that can migrate into the food. Cross-linking in various plastics can lead to physical and chemical modifications that can increase the overall molecular weight. On the other hand, chain scission is fragmentation of polymer chains that leads to a molecular weight reduction.", "title": "Industrial process" }, { "paragraph_id": 20, "text": "To treat the food, it is exposed to a radioactive source for a set period of time to achieve a desired dose. Radiation may be emitted by a radioactive substance, or by X-ray and electron beam accelerators. Special precautions are taken to ensure the foodstuffs never come in contact with the radioactive substances and that the personnel and the environment are protected from exposure to radiation. Irradiation treatments are typically classified by dose (high, medium, and low), but are sometimes classified by the effects of the treatment (radappertization, radicidation, and radurization). Food irradiation is sometimes referred to as \"cold pasteurization\" or \"electronic pasteurization\" because the ionizing radiation does not heat the food to high temperatures during the process, and the effect is similar to heat pasteurization. The term \"cold pasteurization\" is controversial because the term may be used to disguise the fact that the food has been irradiated, and because pasteurization and irradiation are fundamentally different processes.", "title": "Industrial process" }, { "paragraph_id": 21, "text": "Gamma irradiation is produced from the radioisotopes cobalt-60 and caesium-137, which are produced by neutron irradiation of cobalt-59 (the only stable isotope of cobalt) and as a nuclear fission product, respectively. 
Cobalt-60 is the most common source of gamma rays for food irradiation in commercial-scale facilities, as it is water-insoluble and hence poses little risk of environmental contamination by leakage into water systems. As for transportation of the radiation source, cobalt-60 is transported in special trucks that prevent release of radiation and meet the standards set out in the Regulations for the Safe Transport of Radioactive Materials of the International Atomic Energy Agency. The special trucks must meet high safety standards and pass extensive tests to be approved to ship radiation sources. Conversely, caesium-137 is water-soluble and poses a risk of environmental contamination. Insufficient quantities are available for large-scale commercial use, as the vast majority of caesium-137 produced in nuclear reactors is not extracted from spent nuclear fuel. An incident in which water-soluble caesium-137 leaked into the source storage pool, requiring NRC intervention, has led to the near elimination of this radioisotope.", "title": "Industrial process" }, { "paragraph_id": 22, "text": "Gamma irradiation is widely used due to its high penetration depth and dose uniformity, allowing for large-scale applications with high throughput. Additionally, gamma irradiation is significantly less expensive than using an X-ray source. In most designs, the radioisotope, contained in stainless steel pencils, is stored in a water-filled storage pool which absorbs the radiation energy when not in use. For treatment, the source is lifted out of the storage tank, and product contained in totes is passed around the pencils to achieve the required processing.", "title": "Industrial process" }, { "paragraph_id": 23, "text": "Treatment costs vary as a function of dose and facility usage. A pallet or tote is typically exposed for several minutes to hours depending on dose. 
Low-dose applications such as disinfestation of fruit range between US$0.01/lb and US$0.08/lb, while higher-dose applications can cost as much as US$0.20/lb.", "title": "Industrial process" }, { "paragraph_id": 24, "text": "Electron beam treatment uses high-energy electrons accelerated in an accelerator to 99% of the speed of light. This system uses electrical energy and can be powered on and off. The high power correlates with a higher throughput and lower unit cost, but electron beams have low dose uniformity and a penetration depth of centimeters. Therefore, electron beam treatment works best for products of low thickness.", "title": "Industrial process" }, { "paragraph_id": 25, "text": "X-rays are produced by bombardment of dense target material with high-energy accelerated electrons (this process is known as bremsstrahlung-conversion), giving rise to a continuous energy spectrum. Heavy metals, such as tantalum and tungsten, are used because of their high atomic numbers and high melting temperatures. Tantalum is usually preferred over tungsten for industrial, large-area, high-power targets because it is more workable than tungsten and has a higher threshold energy for induced reactions. Like electron beams, x-rays do not require the use of radioactive materials and can be turned off when not in use. X-rays have high penetration depths and high dose uniformity, but they are a very expensive source of irradiation, as only 8% of the incident energy is converted into X-rays.", "title": "Industrial process" }, { "paragraph_id": 26, "text": "UV-C does not penetrate as deeply as other methods. As such, its direct antimicrobial effect is limited to the surface only. Its DNA damage effect produces cyclobutane-type pyrimidine dimers. Besides the direct effects, UV-C also induces resistance even against pathogens not yet inoculated. 
Some of this induced resistance is understood, being the result of temporary inactivation of self-degradation enzymes like polygalacturonase and increased expression of enzymes associated with cell wall repair.", "title": "Industrial process" }, { "paragraph_id": 27, "text": "Irradiation is a capital-intensive technology requiring a substantial initial investment, ranging from $1 million to $5 million. In the case of large research or contract irradiation facilities, major capital costs include a radiation source, hardware (irradiator, totes and conveyors, control systems, and other auxiliary equipment), land (1 to 1.5 acres), radiation shield, and warehouse. Operating costs include salaries (for fixed and variable labor), utilities, maintenance, taxes/insurance, cobalt-60 replenishment, and miscellaneous operating costs. Perishable food items, like fruits, vegetables, and meats, would still need to be handled in the cold chain, so all other supply chain costs remain the same. Food manufacturers have not embraced food irradiation because the market does not support the increased price of irradiated foods, and because of potential consumer backlash against irradiated foods.", "title": "Industrial process" }, { "paragraph_id": 28, "text": "The cost of food irradiation is influenced by dose requirements, the food's tolerance of radiation, handling conditions, i.e., packaging and stacking requirements, construction costs, financing arrangements, and other variables particular to the situation.", "title": "Industrial process" }, { "paragraph_id": 29, "text": "Irradiation has been approved by many countries. For example, in the U.S. and Canada, food irradiation has existed for decades. 
Food irradiation is used commercially and volumes are in general increasing at a slow rate, even in the European Union, where all member countries allow the irradiation of dried herbs, spices, and vegetable seasonings, but only a few allow other foods to be sold as irradiated.", "title": "State of the industry" }, { "paragraph_id": 30, "text": "Although there are some consumers who choose not to purchase irradiated food, a sufficient market has existed for retailers to have continuously stocked irradiated products for years. When labeled irradiated food is offered for retail sale, consumers buy it and re-purchase it, indicating a market for irradiated foods, although there is a continuing need for consumer education.", "title": "State of the industry" }, { "paragraph_id": 31, "text": "Food scientists have concluded that any fresh or frozen food undergoing irradiation at specified doses is safe to consume, with some 60 countries using irradiation to maintain quality in their food supply.", "title": "State of the industry" }, { "paragraph_id": 32, "text": "The following risks can be mentioned:", "title": "Radurization risks" }, { "paragraph_id": 33, "text": "The Codex Alimentarius represents the global standard for irradiation of food, in particular under the WTO-agreement. Regardless of treatment source, all processing facilities must adhere to safety standards set by the International Atomic Energy Agency (IAEA), Codex Code of Practice for the Radiation Processing of Food, Nuclear Regulatory Commission (NRC), and the International Organization for Standardization (ISO). More specifically, ISO 14470 and ISO 9001 provide in-depth information regarding safety in irradiation facilities.", "title": "Standards and regulations" }, { "paragraph_id": 34, "text": "All commercial irradiation facilities contain safety systems which are designed to prevent exposure of personnel to radiation. The radiation source is constantly shielded by water, concrete, or metal. 
Irradiation facilities are designed with overlapping layers of protection, interlocks, and safeguards to prevent accidental radiation exposure. Meltdowns are unlikely to occur due to the low heat production of the sources used.", "title": "Standards and regulations" }, { "paragraph_id": 35, "text": "The provisions of the Codex Alimentarius are that any \"first generation\" product must be labeled \"irradiated\", as must any product derived directly from an irradiated raw material; for ingredients, the provision is that even the last molecule of an irradiated ingredient must be listed with the ingredients, even in cases where the unirradiated ingredient does not appear on the label. The RADURA-logo is optional; several countries use a graphical version that differs from the Codex-version. The suggested rules for labeling are published at CODEX-STAN – 1 (2005), and include the usage of the Radura symbol for all products that contain irradiated foods. The Radura symbol is not a designator of quality. The amount of pathogens remaining is based upon dose and the original content, and the dose applied can vary on a product-by-product basis.", "title": "Standards and regulations" }, { "paragraph_id": 36, "text": "The European Union follows the Codex's provision to label irradiated ingredients down to the last molecule of irradiated food. The European Union does not provide for the use of the Radura logo and relies exclusively on labeling by the appropriate phrases in the respective languages of the Member States. The European Union enforces its irradiation labeling laws by requiring its member countries to perform tests on a cross-section of food items in the marketplace and to report to the European Commission. 
The results are published annually on EUR-Lex.", "title": "Standards and regulations" }, { "paragraph_id": 37, "text": "The US defines irradiated foods as foods in which the irradiation causes a material change in the food, or a material change in the consequences that may result from the use of the food. Therefore, food that is processed as an ingredient by a restaurant or food processor is exempt from the labeling requirement in the US. All irradiated foods must include a prominent Radura symbol along with the statement \"treated with irradiation\" or \"treated by irradiation\". Bulk foods must be individually labeled with the symbol and statement or, alternatively, the Radura and statement should be located next to the sale container.", "title": "Standards and regulations" }, { "paragraph_id": 38, "text": "Under section 409 of the Federal Food, Drug, and Cosmetic Act, irradiation of prepackaged foods requires premarket approval not only for the irradiation source for a specific food but also for the food packaging material. Approved packaging materials include various plastic films, yet approval does not cover a variety of polymers and adhesive-based materials that have been found to meet specific standards. The lack of packaging material approval limits manufacturers' production and expansion of irradiated prepackaged foods.", "title": "Standards and regulations" }, { "paragraph_id": 39, "text": "Materials approved by the FDA for irradiation according to 21 CFR 179.45:", "title": "Standards and regulations" }, { "paragraph_id": 40, "text": "In 2003, the Codex Alimentarius removed any upper dose limit for food irradiation as well as clearances for specific foods, declaring that all are safe to irradiate. 
Countries such as Pakistan and Brazil have adopted the Codex without any reservation or restriction.", "title": "Standards and regulations" }, { "paragraph_id": 41, "text": "Standards that describe calibration and operation for radiation dosimetry, as well as procedures to relate the measured dose to the effects achieved and to report and document such results, are maintained by the American Society for Testing and Materials (ASTM International) and are also available as ISO/ASTM standards.", "title": "Standards and regulations" }, { "paragraph_id": 42, "text": "All of the rules involved in processing food are applied to all foods before they are irradiated.", "title": "Standards and regulations" }, { "paragraph_id": 43, "text": "The U.S. Food and Drug Administration (FDA) is the agency responsible for regulation of radiation sources in the United States. Irradiation, as defined by the FDA, is a \"food additive\" as opposed to a food process, and therefore falls under the food additive regulations. Each food approved for irradiation has specific guidelines in terms of minimum and maximum dosage as determined safe by the FDA. Packaging materials containing the food processed by irradiation must also undergo approval. The United States Department of Agriculture (USDA) amends these rules for use with meat, poultry, and fresh fruit.", "title": "Standards and regulations" }, { "paragraph_id": 44, "text": "The United States Department of Agriculture (USDA) has approved the use of low-level irradiation as an alternative treatment to pesticides for fruits and vegetables that are considered hosts to a number of insect pests, including fruit flies and seed weevils. 
Under bilateral agreements that allow less-developed countries to earn income through food exports, arrangements are made to allow them to irradiate fruits and vegetables at low doses to kill insects, so that the food can avoid quarantine.", "title": "Standards and regulations" }, { "paragraph_id": 45, "text": "The U.S. Food and Drug Administration and the U.S. Department of Agriculture have approved irradiation of the following foods and purposes:", "title": "Standards and regulations" }, { "paragraph_id": 46, "text": "European law stipulates that all member countries must allow the sale of irradiated dried aromatic herbs, spices, and vegetable seasonings. However, these Directives allow Member States to maintain previous clearances for food categories the EC's Scientific Committee on Food (SCF) had previously approved (the approval body is now the European Food Safety Authority). Presently, Belgium, Czech Republic, France, Italy, Netherlands, and Poland allow the sale of many different types of irradiated foods. Before individual items in an approved class can be added to the approved list, studies into the toxicology of each such food, at each of the proposed dose ranges, are requested. It also states that irradiation shall not be used \"as a substitute for hygiene or health practices or good manufacturing or agricultural practice\". These Directives only control food irradiation for food retail, and their conditions and controls are not applicable to the irradiation of food for patients requiring sterile diets. 
In 2021, the most common food items irradiated were frog legs at 65.1%, poultry at 20.6%, and dried aromatic herbs, spices, and vegetable seasonings.", "title": "Standards and regulations" }, { "paragraph_id": 47, "text": "Due to the European Single Market, any food, even if irradiated, must be allowed to be marketed in any other member state even if a general ban of food irradiation prevails, under the condition that the food has been irradiated legally in the state of origin.", "title": "Standards and regulations" }, { "paragraph_id": 48, "text": "Furthermore, imports into the EC are possible from third countries if the irradiation facility has been inspected and approved by the EC and the treatment is legal within the EC or a Member State.", "title": "Standards and regulations" }, { "paragraph_id": 49, "text": "In Australia, following cat deaths linked to the consumption of irradiated cat food and a producer's voluntary recall, cat food irradiation was banned.", "title": "Standards and regulations" }, { "paragraph_id": 50, "text": "Interlocks and safeguards are mandated to minimize this risk. There have been radiation-related accidents, deaths, and injuries at such facilities, many of them caused by operators overriding the safety-related interlocks. In a radiation processing facility, radiation-specific concerns are supervised by special authorities, while \"ordinary\" occupational safety regulations are handled much as in other businesses.", "title": "Standards and regulations" }, { "paragraph_id": 51, "text": "The safety of irradiation facilities is regulated by the United Nations International Atomic Energy Agency and monitored by the different national Nuclear Regulatory Commissions. The regulators enforce a safety culture that mandates that all incidents that occur are documented and thoroughly analyzed to determine the cause and improvement potential. 
Such incidents are studied by personnel at multiple facilities, and improvements are mandated to retrofit existing facilities and inform future designs.", "title": "Standards and regulations" }, { "paragraph_id": 52, "text": "In the US, the Nuclear Regulatory Commission (NRC) regulates the safety of the processing facility, and the United States Department of Transportation (DOT) regulates the safe transport of the radioactive sources.", "title": "Standards and regulations" }, { "paragraph_id": 53, "text": "The word \"radurization\" is derived from radura, combining the initial letters of the word \"radiation\" with the stem of \"durus\", the Latin word for hard, lasting.", "title": "Origin of the word \"Radurization\"" } ]
Food irradiation is the process of exposing food and food packaging to ionizing radiation, such as from gamma rays, x-rays, or electron beams. Food irradiation improves food safety and extends product shelf life (preservation) by effectively destroying organisms responsible for spoilage and foodborne illness, inhibits sprouting or ripening, and is a means of controlling insects and invasive pests. In the US, consumer perception of foods treated with irradiation is more negative than those processed by other means. The U.S. Food and Drug Administration (FDA), the World Health Organization (WHO), the Centers for Disease Control and Prevention (CDC), and U.S. Department of Agriculture (USDA) have performed studies that confirm irradiation to be safe. In order for a food to be irradiated in the US, the FDA will still require that the specific food be thoroughly tested for irradiation safety. Food irradiation is permitted in over 60 countries, and about 500,000 metric tons of food are processed annually worldwide. The regulations for how food is to be irradiated, as well as the foods allowed to be irradiated, vary greatly from country to country. In Austria, Germany, and many other countries of the European Union only dried herbs, spices, and seasonings can be processed with irradiation and only at a specific dose, while in Brazil all foods are allowed at any dose.
2001-12-14T13:07:42Z
2023-12-10T12:40:22Z
[ "Template:Short description", "Template:See also", "Template:Webarchive", "Template:Circa", "Template:Citation needed", "Template:Reflist", "Template:Cite book", "Template:Wikibooks", "Template:Portal", "Template:Cite journal", "Template:ISBN", "Template:Consumer Food Safety", "Template:Food preservation", "Template:Use mdy dates", "Template:Further", "Template:Cite web", "Template:Cite conference", "Template:Dead link", "Template:Commons", "Template:Wiktionary" ]
https://en.wikipedia.org/wiki/Food_irradiation
15,378
Copper IUD
A copper intrauterine device (IUD), also known as an intrauterine coil, copper coil, or non-hormonal IUD, is a type of intrauterine device which contains copper. It is used for birth control and emergency contraception within five days of unprotected sex. It is one of the most effective forms of birth control, with a one-year failure rate around 0.7%. The device is placed in the uterus and lasts up to twelve years. It may be used by women of all ages regardless of whether or not they have had children. Following removal, fertility quickly returns. Side effects may include heavy menstrual periods; rarely, the device may come out. It is less recommended for people at high risk of sexually transmitted infections, as it may increase the risk of pelvic inflammatory disease in the first three weeks after insertion. It is recommended for people who do not tolerate hormonal contraceptives well. If a woman becomes pregnant with an IUD in place, removal is recommended. Very rarely, uterine perforation may occur during insertion if not done properly. The copper IUD is a type of long-acting reversible birth control. It primarily works by killing the sperm. The copper IUD came into medical use in the 1970s. It is on the World Health Organization's List of Essential Medicines. Copper IUDs are used by more than 170 million women globally. Copper IUDs are a form of long-acting reversible contraception and are one of the most effective forms of birth control available. The type of frame and amount of copper can affect the effectiveness of different copper IUD models. The failure rates for different models vary between 0.1% and 2.2% after 1 year of use. The T-shaped models with a surface area of 380 mm² of copper have the lowest failure rates. The TCu 380A (ParaGard) has a one-year failure rate of 0.8% and a cumulative 12-year failure rate of 2.2%. Over 12 years of use, the models with less surface area of copper have higher failure rates. 
The TCu 220A has a 12-year failure rate of 5.8%. The frameless GyneFix has a failure rate of less than 1% per year. Worldwide, older IUD models with lower effectiveness rates are no longer produced. Unlike other forms of reversible contraception, the typical use failure rate and the perfect use failure rate for the copper IUDs are the same because the IUD does not depend on user action. A 2008 review of the available T-shaped copper IUDs recommended that the TCu 380A and the TCu 280S be used as the first choice for copper IUDs because those two models have the lowest failure rates and the longest lifespans. The effectiveness of the copper IUD (failure rate of 0.8%) is comparable to tubal sterilization (failure rate of 0.5%) for the first year. It was first discovered in 1976 that the copper IUD could be used as a form of emergency contraception (EC). The copper IUD is the most effective form of emergency contraception. It is more effective than the hormonal EC pills currently available. The pregnancy rate among those using the copper IUD for EC is 0.09%. It can be used for EC up to five days after the act of unprotected sex and does not decrease in effectiveness during the five days. An additional advantage of using the copper IUD for emergency contraception is that it can be used as a form of birth control for 10–12 years after insertion. Removal of the copper IUD should also be performed by a qualified medical practitioner. Fertility has been shown to return to previous levels quickly after removal of the device. One study found that the median amount of time from removal to planned pregnancy was three months for those women using the TCu 380Ag. Expulsion: Sometimes, the copper IUD can be spontaneously expelled from the uterus. Expulsion rates can range from 2.2% to 11.4% of users from the first year to the 10th year. The TCu380A may have lower rates of expulsion than other models. 
Unusual vaginal discharge, cramping or pain, spotting between periods, postcoital (after sex) spotting, dyspareunia, or the absence or lengthening of the strings can be signs of a possible expulsion. If expulsion occurs, the woman is not protected against pregnancy. If an IUD with copper is inserted after an expulsion has occurred, the risk of re-expulsion has been estimated in one study to be approximately one third of cases after one year. Magnetic resonance imaging (MRI) may cause dislocation of a copper IUD, and it is therefore recommended to check the location of the IUD both before and after MRI. Perforation: Very rarely, the IUD can move through the wall of the uterus. Risk of perforation is mostly determined by the skill of the practitioner performing the insertion. For experienced medical practitioners, the risk of perforation is 1 per 1,000 insertions or less. Infection: The insertion of a copper IUD poses a transient risk of pelvic inflammatory disease (PID) in the first 21 days after insertion. However, it is a small risk and is attributable to preexisting gonorrhea or chlamydia infection at the time of insertion, and not to the IUD itself. Proper infection prevention procedures have little or no effect on the course of gonorrhea or chlamydia infections but are important in helping protect both clients and providers from infection in general. Such infection prevention practices include washing hands and then putting on gloves, cleaning the cervix and vagina, making minimal contact with non-sterile surfaces (using a no-touch insertion technique), and, after the procedure, washing hands again and then processing instruments. The device itself carries no increased risk of PID beyond the time of insertion. Cramping: Some women can feel cramping during the IUD insertion process and immediately after as a result of cervix dilation during insertion. Taking NSAIDs before the procedure often reduces discomfort, as does the use of a local anaesthetic. 
Misoprostol 6 to 12 hours before insertion can help with cervical dilation. Some women may have cramps for 1 to 2 weeks following insertion. Heavier periods: The copper IUD may increase the amount of blood flow during a woman's menstrual periods. On average, menstrual blood loss may increase by 20–50% after insertion of a copper-T IUD; this symptom may clear up for some women after 3 to 6 months. Irregular bleeding and spotting: For some women, the copper IUD may cause spotting between periods during the first 3 to 6 months after insertion. Pregnancy: Although rare, if pregnancy does occur with the copper IUD in place, there can be side effects. The risk of ectopic pregnancy to a woman using an IUD is lower than the risk of ectopic pregnancy to a woman using no form of birth control. However, of pregnancies that do occur during IUD use, a higher than expected percentage (3–4%) are ectopic. If pregnancy occurs with the IUD in place, there is a higher risk of miscarriage or early delivery. If this occurs and the IUD strings are visible, the IUD should be removed immediately by a clinician. Although the Dalkon Shield IUD was associated with septic abortions (infections associated with miscarriage), other brands of IUD are not. IUDs are also not associated with birth defects. Some barrier contraceptives protect against STIs. Hormonal contraceptives reduce the risk of developing pelvic inflammatory disease (PID), a serious complication of certain STIs. IUDs, by contrast, do not protect against STIs or PID. Copper Toxicity: There exists anecdotal evidence linking copper IUDs to cases of copper toxicity. A category 3 condition indicates conditions where the theoretical or proven risks usually outweigh the advantages of inserting a copper IUD. A category 4 condition indicates conditions that represent an unacceptable health risk if a copper IUD is inserted. 
Women should not use a copper IUD if they:

(Category 4)

(Category 3)

A full list of contraindications can be found in the World Health Organization (WHO) Medical Eligibility Criteria for Contraceptive Use and the Centers for Disease Control and Prevention (CDC) United States Medical Eligibility Criteria for Contraceptive Use.

Being a nulliparous woman (a woman who has never given birth) is not a contraindication for IUD use. IUDs are safe and acceptable even in young nulliparous women.

There are a number of models of copper IUDs available around the world. Most copper devices consist of a plastic core that is wrapped in copper wire. Many of the devices have a T-shape similar to the hormonal IUD. However, there are "frameless" copper IUDs available as well. ParaGard is the only model currently available in the United States. At least three copper IUD models are available in Canada, two of which are slimmer T-shape versions used for women who have not had children. Early copper IUDs had copper around only the vertical stem, but more recent models have copper sleeves wrapped around the horizontal arms as well, increasing effectiveness.

Some newer models also contain a silver core instead of a plastic core to delay copper fragmentation as well as increase the lifespan of the device. The lifespan of the devices ranges from 3 to 10 years; however, some studies have demonstrated that the TCu 380A may be effective through 12 years.

Its ATC code is G02BA (WHO).

A copper IUD can be inserted at any phase of the menstrual cycle, but the optimal time is right after the menstrual period when the cervix is softest and the woman is least likely to be pregnant. The insertion process generally takes five minutes or less. The procedure can cause cramping or be painful for some women. Before placement of an IUD, a medical history and physical examination by a medical professional is useful to check for any contraindications or concerns.
It is also recommended by some clinicians that patients be tested for gonorrhea and chlamydia, as these two infections increase the risk of contracting pelvic inflammatory disease shortly after insertion.

Immediately prior to insertion, the clinician will perform a pelvic exam to determine the position of the uterus. After the pelvic exam, the vagina is held open with a speculum. A tenaculum is used to steady the cervix and uterus. Uterine sounding may be used to measure the length and direction of the cervical canal and uterus in order to decrease the risk of uterine perforation. The IUD is placed using a narrow tube, which is inserted through the cervix into the uterus. Short monofilament plastic/nylon strings hang down from the uterus into the vagina. The clinician will trim the threads so that they only protrude 3 to 4 cm out of the cervix and remain in the upper vagina. The strings allow the patient or clinician to periodically check to ensure the IUD is still in place and to enable easy removal of the device.

The copper IUD can be inserted at any time in a woman's menstrual cycle as long as the woman is not pregnant. An IUD can also be inserted immediately postpartum and post-abortion as long as no infection has occurred. Breastfeeding is not a contraindication for the use of the copper IUD. The IUD can be inserted in women with HIV or AIDS as it does not increase the risk of transmission. Although previously not recommended for nulliparous women (women who have not had children), the IUD is now recommended for most women who are past menarche (their first period), including adolescents.

After the insertion is finished, normal activities such as sex, exercise, and swimming can be performed as soon as it feels comfortable. Strenuous physical activity does not affect the position of the IUD.

Many different types of copper IUDs are currently manufactured worldwide, but availability varies by country.
In the United States, only one type of copper IUD is approved for use, while in the United Kingdom, over ten varieties are available. One company, Mona Lisa N.V., offers generic versions of many existing IUDs.

The frameless IUD eliminates the use of the frame that gives conventional IUDs their signature T-shape. This change in design was made to reduce discomfort and expulsion associated with prior IUDs; without a solid frame, the frameless IUD should mold to the shape of the uterus. It may reduce expulsion and discontinuation rates compared to framed copper IUDs.

GyneFix is the only frameless IUD brand currently available. It consists of hollow copper tubes on a polypropylene thread. It is inserted through the cervix with a special applicator that anchors the thread to the fundus (top) of the uterus; the thread is then cut with a tail hanging outside of the cervix, similar to framed IUDs, or looped back into the cervical canal for patient comfort. When this tail is pulled, the anchor is released and the device can be removed. This requires more force than removing a T-shaped IUD and results in comparable discomfort during removal.

The copper IUD's primary mechanism of action is to prevent fertilization. Copper acts as a spermicide within the uterus. The presence of copper increases the levels of copper ions, prostaglandins, and white blood cells within the uterine and tubal fluids.

Although not a primary mechanism of action, some experts in human reproduction believe there is sufficient evidence to suggest that IUDs with copper can disrupt implantation, especially when used for emergency contraception. Despite this, there has been no definitive evidence that IUD users have higher rates of embryonic loss than women not using contraception. Therefore, the copper IUD is considered to be a true contraceptive and not an abortifacient.

Globally, the IUD is the most widely used method of reversible birth control.
The most recent data indicate that there are 169 million IUD users around the world. This includes both the nonhormonal and hormonal IUDs. IUDs are most popular in Asia, where the prevalence is almost 30%. In Africa and Europe, the prevalence is around 20%. As of 2009, levels of IUD use in the United States are estimated to be 5.5%. Data in the United States do not distinguish between hormonal and non-hormonal IUDs. In Europe, copper IUD prevalence ranged from under 5% in the United Kingdom to over 10% in Denmark in 2006.

According to popular legend, Arab traders inserted small stones into the uteruses of their camels to prevent pregnancy during long desert treks. The story was originally a tall tale to entertain delegates at a scientific conference on family planning; although it was later repeated as truth, it has no known historical basis.

Precursors to IUDs were first marketed in 1902. Developed from stem pessaries (where the stem held the pessary in place over the cervix), the 'stem' on these devices actually extended into the uterus itself. Because they occupied both the vagina and the uterus, this type of stem pessary was also known as an intrauterine device. The use of intrauterine devices was associated with high rates of infection; for this reason, they were condemned by the medical community.

The first intrauterine device (contained entirely in the uterus) was described in a German publication in 1909, although the author appears to have never marketed his product.

In 1929, Ernst Gräfenberg of Germany published a report on an IUD made of silk sutures. He found a 3% pregnancy rate among 1,100 women using his ring. In 1930, Gräfenberg reported a lower pregnancy rate of 1.6% among 600 women using an improved ring wrapped in silver wire. Unbeknownst to Gräfenberg, the silver wire was contaminated with 26% copper. Copper's role in increasing IUD efficacy would not be recognized until nearly 40 years later.
In 1934, Japanese physician Tenrei Ota developed a variation of Gräfenberg's ring that contained a supportive structure in the center. The addition of this central disc lowered the IUD's expulsion rate. These devices still had high rates of infection, and their use and development were further stifled by World War II politics: contraception was forbidden in both Nazi Germany and Axis-allied Japan. The Allies did not learn of the work by Gräfenberg and Ota until well after the war ended.

The first plastic IUD, the Margulies Coil or Margulies Spiral, was introduced in 1958. This device was somewhat large, causing discomfort to a large proportion of users, and had a hard plastic tail, causing discomfort to their male partners. The modern colloquialism "coil" is based on the coil-shaped design of early IUDs.

The Lippes Loop, a slightly smaller device with a monofilament tail, was introduced in 1962 and gained in popularity over the Margulies device.

The stainless steel single-ring IUD was developed in the 1970s and widely used in China because of low manufacturing costs. The Chinese government banned production of steel IUDs in 1993 due to high failure rates (up to 10% per year).

Howard Tatum, in the US, conceived the plastic T-shaped IUD in 1968. Shortly thereafter Jaime Zipper, in Chile, introduced the idea of adding copper to the devices to improve their contraceptive effectiveness. It was found that copper-containing devices could be made in smaller sizes without compromising effectiveness, resulting in fewer side effects such as pain and bleeding. T-shaped devices had lower rates of expulsion due to their greater similarity to the shape of the uterus.

The poorly designed Dalkon Shield plastic IUD (which had a multifilament tail) was manufactured by the A. H.
Robins Company and sold by Robins in the United States for three and a half years, from January 1971 through June 1974, before sales were suspended by Robins on June 28, 1974, at the request of the FDA because of safety concerns following reports of 110 septic spontaneous abortions in women with the Dalkon Shield in place, seven of whom had died. Robins stopped international sales of the Dalkon Shield in April 1975.

Tatum developed many different models of the copper IUD. He created the TCu220C, which used copper collars instead of a copper filament, preventing metal loss and increasing the lifespan of the device. Second-generation copper-T IUDs were also introduced in the 1970s. These devices had higher surface areas of copper, and for the first time consistently achieved effectiveness rates of greater than 99%. The last model Tatum developed was the TCu380A, the model that is most recommended today.

The Paragard T-380A is an IUD with copper, manufactured and marketed in the United States by The Cooper Companies. It is the only copper-containing intrauterine device approved for use in the U.S. (three hormonal intrauterine devices, Mirena, Skyla, and Liletta, are also approved). The Paragard consists of a T-shaped polyethylene frame wound with copper wire, along with two monofilament threads to aid in the removal of the IUD.

The Paragard T 380A was developed in the 1970s by the Population Council and Finishing Enterprises Inc. (FEI). The Population Council's Paragard new drug application (NDA) was approved by the U.S. Food and Drug Administration (FDA) and FEI began manufacturing it for distribution outside the United States in 1984. GynoPharma (originally GynoMed) began marketing it in the U.S. in May 1988. On August 2, 1995, Ortho-McNeil acquired GynoPharma and began marketing Paragard in the U.S. On January 1, 2004, FEI Women's Health acquired the patent from the Population Council and U.S. marketing rights from Ortho-McNeil.
On November 10, 2005, Duramed Pharmaceuticals, a subsidiary of Barr Pharmaceuticals, acquired FEI Women's Health and Paragard. On July 18, 2008, it was announced that Teva Pharmaceutical Industries Ltd. would acquire Barr Pharmaceuticals.

On November 1, 2017, The Cooper Companies acquired Paragard from Teva Pharmaceutical Industries for approximately $1.1 billion.

The original FDA approval of Paragard in 1984 was for 4 years of continuous use; this was later extended to 6 years in 1989, then 8 years in 1991, and then 10 years in 1994. (ATC code G02BA02 (WHO))
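The per-year failure rates cited above (up to 10% per year for the banned steel rings, versus under 1% per year for modern copper-T devices) compound over a device's multi-year lifespan. The sketch below is illustrative arithmetic only: it assumes a constant annual failure risk that is independent from year to year, which is a simplification (observed annual risk with copper IUDs declines over time, so this overstates real long-run cumulative failure).

```python
# Illustrative only: turning a constant annual contraceptive failure rate
# into a cumulative failure probability over several years of use,
# assuming each year's risk is independent (a simplifying assumption).

def cumulative_failure(annual_rate: float, years: int) -> float:
    """Probability of at least one failure over `years` of use."""
    return 1 - (1 - annual_rate) ** years

# A steel-ring IUD failing at ~10% per year accumulates risk quickly:
print(round(cumulative_failure(0.10, 5), 3))    # ~0.41 over 5 years

# A modern copper-T at well under 1% per year stays low over a decade:
print(round(cumulative_failure(0.008, 10), 3))  # ~0.077 over 10 years
```

Under this model the difference between a 10%-per-year device and a sub-1%-per-year device is roughly a factor of five in cumulative failures even before accounting for the copper-T's longer approved lifespan.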
[ { "paragraph_id": 0, "text": "A copper intrauterine device (IUD), also known as an intrauterine coil or copper coil or non-hormonal IUD, is a type of intrauterine device which contains copper. It is used for birth control and emergency contraception within five days of unprotected sex. It is one of the most effective forms of birth control with a one-year failure rate around 0.7%. The device is placed in the uterus and lasts up to twelve years. It may be used by women of all ages regardless of whether or not they have had children. Following removal, fertility quickly returns.", "title": "" }, { "paragraph_id": 1, "text": "Side effects may be heavy menstrual periods, and/or rarely the device may come out. It is less recommended for people at high risk of sexually transmitted infections as it may increase the risk of pelvic inflammatory disease in the first three weeks after insertion. It is recommended for people who don't tolerate or hardly tolerate hormonal contraceptives. If a woman becomes pregnant with an IUD in place removal is recommended. Very rarely, uterine perforation may occur during insertion if not done properly. The copper IUD is a type of long-acting reversible birth control. It primarily works by killing the sperm.", "title": "" }, { "paragraph_id": 2, "text": "The copper IUD came into medical use in the 1970s. It is on the World Health Organization's List of Essential Medicines. They are used by more than 170 million women globally.", "title": "" }, { "paragraph_id": 3, "text": "Copper IUDs are a form of long-acting reversible contraception and are one of the most effective forms of birth control available. The type of frame and amount of copper can affect the effectiveness of different copper IUD models. The failure rates for different models vary between 0.1 and 2.2% after 1 year of use. The T-shaped models with a surface area of 380 mm² of copper have the lowest failure rates. 
The TCu 380A (ParaGard) has a one-year failure rate of 0.8% and a cumulative 12-year failure rate of 2.2%. Over 12 years of use, the models with less surface area of copper have higher failure rates. The TCu 220A has a 12-year failure rate of 5.8%. The frameless GyneFix has a failure rate of less than 1% per year. Worldwide, older IUD models with lower effectiveness rates are no longer produced.", "title": "Medical uses" }, { "paragraph_id": 4, "text": "Unlike other forms of reversible contraception, the typical use failure rate and the perfect use failure rate for the copper IUDs are the same because the IUD does not depend on user action. A 2008 review of the available T-shaped copper IUDs recommended that the TCu 380A and the TCu 280S be used as the first choice for copper IUDs because those two models have the lowest failure rates and the longest lifespans. The effectiveness of the copper IUD (failure rate of 0.8%) is comparable to tubal sterilization (failure rate of 0.5%) for the first year.", "title": "Medical uses" }, { "paragraph_id": 5, "text": "It was first discovered in 1976 that the copper IUD could be used as a form of emergency contraception (EC). The copper IUD is the most effective form of emergency contraception. It is more effective than the hormonal EC pills currently available. The pregnancy rate among those using the copper IUD for EC is 0.09%. It can be used for EC up to five days after the act of unprotected sex and does not decrease in effectiveness during the five days. An additional advantage of using the copper IUD for emergency contraception is that it can be used as a form of birth control for 10–12 years after insertion.", "title": "Medical uses" }, { "paragraph_id": 6, "text": "Removal of the copper IUD should also be performed by a qualified medical practitioner. Fertility has been shown to return to previous levels quickly after removal of the device. 
One study found that the median amount of time from removal to planned pregnancy was three months for those women using the TCu 380Ag.", "title": "Medical uses" }, { "paragraph_id": 7, "text": "Expulsion: Sometimes, the copper IUD can be spontaneously expelled from the uterus. Expulsion rates can range from 2.2% to 11.4% of users from the first year to the 10th year. The TCu380A may have lower rates of expulsion than other models. Unusual vaginal discharge, cramping or pain, spotting between periods, postcoital (after sex) spotting, dyspareunia, or the absence or lengthening of the strings can be signs of a possible expulsion. If expulsion occurs, the woman is not protected against pregnancy. If an IUD with copper is inserted after an expulsion has occurred, the risk of re-expulsion has been estimated in one study to be approximately one third of cases after one year. Magnetic resonance imaging (MRI) may cause dislocation of a copper IUD, and it is therefore recommended to check the location of the IUD both before and after MRI.", "title": "Side effects" }, { "paragraph_id": 8, "text": "Perforation: Very rarely, the IUD can move through the wall of the uterus. Risk of perforation is mostly determined by the skill of the practitioner performing the insertion. For experienced medical practitioners, the risk of perforation is 1 per 1,000 insertions or less.", "title": "Side effects" }, { "paragraph_id": 9, "text": "Infection: The insertion of a copper IUD poses a transient risk of pelvic inflammatory disease (PID) in the first 21 days after insertion. However, it is a small risk and is attributable to preexisting gonorrhea or chlamydia infection at the time of insertion, and not to the IUD itself. Proper infection prevention procedures have little or no effect on the course of gonorrhea or chlamydia infections but are important in helping protect both clients and providers from infection in general. 
Such infection prevention practices include washing hands and then putting on gloves, cleaning the cervix and vagina, making minimal contact with non-sterile surfaces (using a no-touch insertion technique), and, after the procedure, washing hands again and then processing instruments. The device itself carries no increased risk of PID beyond the time of insertion.", "title": "Side effects" }, { "paragraph_id": 10, "text": "Cramping: Some women can feel cramping during the IUD insertion process and immediately after as a result of cervix dilation during insertion. Taking NSAIDs before the procedure often reduces discomfort, as the use of a local anaesthetic. Misoprostol 6 to 12 hrs before insertion can help with cervical dilation. Some women may have cramps for 1 to 2 weeks following insertion.", "title": "Side effects" }, { "paragraph_id": 11, "text": "Heavier periods: The copper IUD may increase the amount of blood flow during a woman's menstrual periods. On average, menstrual blood loss may increase by 20–50% after insertion of a copper-T IUD; This symptom may clear up for some women after 3 to 6 months.", "title": "Side effects" }, { "paragraph_id": 12, "text": "Irregular bleeding and spotting: For some women, the copper IUD may cause spotting between periods during the first 3 to 6 months after insertion.", "title": "Side effects" }, { "paragraph_id": 13, "text": "Pregnancy: Although rare, if pregnancy does occur with the copper IUD in place there can be side effects. The risk of ectopic pregnancy to a woman using an IUD is lower than the risk of ectopic pregnancy to a woman using no form of birth control. However, of pregnancies that do occur during IUD use, a higher than expected percentage (3–4%) are ectopic. If pregnancy occurs with the IUD in place there is a higher risk of miscarriage or early delivery. If this occurs and the IUD strings are visible, the IUD should be removed immediately by a clinician. 
Although the Dalkon Shield IUD was associated with septic abortions (infections associated with miscarriage), other brands of IUD are not. IUDs are also not associated with birth defects.", "title": "Side effects" }, { "paragraph_id": 14, "text": "Some barrier contraceptives protect against STIs. Hormonal contraceptives reduce the risk of developing pelvic inflammatory disease (PID), a serious complication of certain STIs. IUDs, by contrast, do not protect against STIs or PID.", "title": "Side effects" }, { "paragraph_id": 15, "text": "Copper Toxicity: There exists anecdotal evidence linking copper IUDs to cases of copper toxicity.", "title": "Side effects" }, { "paragraph_id": 16, "text": "A category 3 condition indicates conditions where the theoretical or proven risks usually outweigh the advantages of inserting a copper IUD. A category 4 condition indicates conditions that represent an unacceptable health risk if a copper IUD is inserted.", "title": "Side effects" }, { "paragraph_id": 17, "text": "Women should not use a copper IUD if they:", "title": "Side effects" }, { "paragraph_id": 18, "text": "(Category 4)", "title": "Side effects" }, { "paragraph_id": 19, "text": "(Category 3)", "title": "Side effects" }, { "paragraph_id": 20, "text": "A full list of contraindications can be found in the World Health Organization (WHO) Medical Eligibility Criteria for Contraceptive Use and the Centers for Disease Control and Prevention (CDC) United States Medical Eligibility Criteria for Contraceptive Use.", "title": "Side effects" }, { "paragraph_id": 21, "text": "Being a nulliparous women (women who have never given birth) is not a contraindication for IUD use. IUDs are safe and acceptable even in young nulliparous women.", "title": "Side effects" }, { "paragraph_id": 22, "text": "There are a number of models of copper IUDs available around the world. Most copper devices consist of a plastic core that is wrapped in a copper wire. 
Many of the devices have a T-shape similar to the hormonal IUD. However, there are \"frameless\" copper IUDs available as well. ParaGard is the only model currently available in the United States. At least three copper IUD models are available in Canada, two of which are slimmer T-shape versions used for women who have not had children. Early copper IUDs had copper around only the vertical stem, but more recent models have copper sleeves wrapped around the horizontal arms as well, increasing effectiveness.", "title": "Device description" }, { "paragraph_id": 23, "text": "Some newer models also contain a silver core instead of a plastic core to delay copper fragmentation as well as increase the lifespan of the device. The lifespan of the devices range from 3 years to 10 years; however, some studies have demonstrated that the TCu 380A may be effective through 12 years.", "title": "Device description" }, { "paragraph_id": 24, "text": "Its ATC code is G02BA (WHO).", "title": "Device description" }, { "paragraph_id": 25, "text": "A copper IUD can be inserted at any phase of the menstrual cycle, but the optimal time is right after the menstrual period when the cervix is softest and the woman is least likely to be pregnant. The insertion process generally takes five minutes or less. The procedure can cause cramping or be painful for some women. Before placement of an IUD, a medical history and physical examination by a medical professional is useful to check for any contraindications or concerns. It is also recommended by some clinicians that patients be tested for gonorrhea and chlamydia, as these two infections increase the risk of contracting pelvic inflammatory disease shortly after insertion.", "title": "Device description" }, { "paragraph_id": 26, "text": "Immediately prior to insertion, the clinician will perform a pelvic exam to determine the position of the uterus. After the pelvic exam, the vagina is held open with a speculum. 
A tenaculum is used to steady the cervix and uterus. Uterine sounding may be used to measure the length and direction of the cervical canal and uterus in order to decrease the risk of uterine perforation. The IUD is placed using a narrow tube, which is inserted through the cervix into the uterus. Short monofilament plastic/nylon strings hang down from the uterus into the vagina. The clinician will trim the threads so that they only protrude 3 to 4 cm out of the cervix and remain in the upper vagina. The strings allow the patient or clinician to periodically check to ensure the IUD is still in place and to enable easy removal of the device.", "title": "Device description" }, { "paragraph_id": 27, "text": "The copper IUD can be inserted at any time in a woman's menstrual cycle as long as the woman is not pregnant. An IUD can also be inserted immediately postpartum and post-abortion as long as no infection has occurred. Breastfeeding is not a contraindication for the use of the copper IUD. The IUD can be inserted in women with HIV or AIDS as it does not increase the risk of transmission. Although previously not recommended for nulliparous women (women who have not had children), the IUD is now recommended for most women who are past menarche (their first period), including adolescents.", "title": "Device description" }, { "paragraph_id": 28, "text": "After the insertion is finished, normal activities such as sex, exercise, and swimming can be performed as soon as it feels comfortable. Strenuous physical activity does not affect the position of the IUD.", "title": "Device description" }, { "paragraph_id": 29, "text": "Many different types of copper IUDs are currently manufactured worldwide, but availability varies by country. In the United States, only one type of copper IUD is approved for use, while in the United Kingdom, over ten varieties are available. 
One company, Mona Lisa N.V., offers generic versions of many existing IUDs.", "title": "Device description" }, { "paragraph_id": 30, "text": "The frameless IUD eliminates the use of the frame that gives conventional IUDs their signature T-shape. This change in design was made to reduce discomfort and expulsion associated with prior IUDs; without a solid frame, the frameless IUD should mold to the shape of the uterus. It may reduce expulsion and discontinuation rates compared to framed copper IUDs.", "title": "Device description" }, { "paragraph_id": 31, "text": "Gynefix is the only frameless IUD brand currently available. It consists of hollow copper tubes on a polypropylene thread. It is inserted through the cervix with a special applicator that anchors the thread to the fundus (top) of the uterus; the thread is then cut with a tail hanging outside of the cervix, similar to framed IUDs, or looped back into the cervical canal for patient comfort. When this tail is pulled, the anchor is released and the device can be removed. This requires more force than removing a T-shaped IUD and results in comparable discomfort during removal.", "title": "Device description" }, { "paragraph_id": 32, "text": "The copper IUD's primary mechanism of action is to prevent fertilization. Copper acts as a spermicide within the uterus. The presence of copper increases the levels of copper ions, prostaglandins, and white blood cells within the uterine and tubal fluids.", "title": "Mechanism of action" }, { "paragraph_id": 33, "text": "Although not a primary mechanism of action, some experts in human reproduction believe there is sufficient evidence to suggest that IUDs with copper can disrupt implantation, especially when used for emergency contraception. Despite this, there has been no definitive evidence that IUD users have higher rates of embryonic loss than women not using contraception. 
Therefore, the copper IUD is considered to be a true contraceptive and not an abortifacient.", "title": "Mechanism of action" }, { "paragraph_id": 34, "text": "Globally, the IUD is the most widely used method of reversible birth control. The most recent data indicates that there are 169 million IUD users around the world. This includes both the nonhormonal and hormonal IUDs. IUDs are most popular in Asia, where the prevalence is almost 30%. In Africa and Europe, the prevalence is around 20%. As of 2009, levels of IUD use in the United States are estimated to be 5.5%. Data in the United States does not distinguish between hormonal and non-hormonal IUDs. In Europe, copper IUD prevalence ranges from under 5% in the United Kingdom to over 10% in Denmark in 2006.", "title": "Usage" }, { "paragraph_id": 35, "text": "According to popular legend, Arab traders inserted small stones into the uteruses of their camels to prevent pregnancy during long desert treks. The story was originally a tall tale to entertain delegates at a scientific conference on family planning; although it was later repeated as truth, it has no known historical basis.", "title": "History" }, { "paragraph_id": 36, "text": "Precursors to IUDs were first marketed in 1902. Developed from stem pessaries (where the stem held the pessary in place over the cervix), the 'stem' on these devices actually extended into the uterus itself. Because they occupied both the vagina and the uterus, this type of stem pessary was also known as an intrauterine device. 
The use of intrauterine devices was associated with high rates of infection; for this reason, they were condemned by the medical community.", "title": "History" }, { "paragraph_id": 37, "text": "The first intrauterine device (contained entirely in the uterus) was described in a German publication in 1909, although the author appears to have never marketed his product.", "title": "History" }, { "paragraph_id": 38, "text": "In 1929, Ernst Gräfenberg of Germany published a report on an IUD made of silk sutures. He had found a 3% pregnancy rate among 1,100 women using his ring. In 1930, Gräfenberg reported a lower pregnancy rate of 1.6% among 600 women using an improved ring wrapped in silver wire. Unbeknownst to Gräfenberg, the silver wire was contaminated with 26% copper. Copper's role in increasing IUD efficacy would not be recognized until nearly 40 years later.", "title": "History" }, { "paragraph_id": 39, "text": "In 1934, Japanese physician Tenrei Ota developed a variation of Gräfenberg's ring that contained a supportive structure in the center. The addition of this central disc lowered the IUD's expulsion rate. These devices still had high rates of infection, and their use and development were further stifled by World War II politics: contraception was forbidden in both Nazi Germany and Axis-allied Japan. The Allies did not learn of the work by Gräfenberg and Ota until well after the war ended.", "title": "History" }, { "paragraph_id": 40, "text": "The first plastic IUD, the Margulies Coil or Margulies Spiral, was introduced in 1958. This device was somewhat large, causing discomfort to a large proportion of women users, and had a hard plastic tail, causing discomfort to their male partners. 
The modern colloquialism \"coil\" is based on the coil-shaped design of early IUDs.", "title": "History" }, { "paragraph_id": 41, "text": "The Lippes Loop, a slightly smaller device with a monofilament tail, was introduced in 1962 and gained in popularity over the Margulies device.", "title": "History" }, { "paragraph_id": 42, "text": "The stainless steel single-ring IUD was developed in the 1970s and widely used in China because of low manufacturing costs. The Chinese government banned production of steel IUDs in 1993 due to high failure rates (up to 10% per year).", "title": "History" }, { "paragraph_id": 43, "text": "Howard Tatum, in the US, conceived the plastic T-shaped IUD in 1968. Shortly thereafter Jaime Zipper, in Chile, introduced the idea of adding copper to the devices to improve their contraceptive effectiveness. It was found that copper-containing devices could be made in smaller sizes without compromising effectiveness, resulting in fewer side effects such as pain and bleeding. T-shaped devices had lower rates of expulsion due to their greater similarity to the shape of the uterus.", "title": "History" }, { "paragraph_id": 44, "text": "The poorly designed Dalkon Shield plastic IUD (which had a multifilament tail) was manufactured by the A. H. Robins Company and sold by Robins in the United States for three and a half years from January 1971 through June 1974, before sales were suspended by Robins on June 28, 1974, at the request of the FDA because of safety concerns following reports of 110 septic spontaneous abortions in women with the Dalkon Shield in place, seven of whom had died. Robins stopped international sales of the Dalkon Shield in April 1975.", "title": "History" }, { "paragraph_id": 45, "text": "Tatum developed many different models of the copper IUD. He created the TCu220 C, which had copper collars as opposed to a copper filament, which prevented metal loss and increased the lifespan of the device. 
Second-generation copper-T IUDs were also introduced in the 1970s. These devices had higher surface areas of copper, and for the first time consistently achieved effectiveness rates of greater than 99%. The last model Tatum developed was the TCu380A, the model that is most recommended today.", "title": "History" }, { "paragraph_id": 46, "text": "The Paragard T-380A is an IUD with copper, manufactured and marketed in the United States by The Cooper Companies. It is the only copper-containing intrauterine device approved for use in the U.S. (three hormonal uterine devices, Mirena, Skyla and Liletta are also approved). The Paragard consists of a T-shaped polyethylene frame wound with copper wire, along with two monofilament threads to aid in the removal of the IUD.", "title": "Brands" }, { "paragraph_id": 47, "text": "The Paragard T 380A was developed in the 1970s by the Population Council and Finishing Enterprises Inc. (FEI). The Population Council's Paragard new drug application (NDA) was approved by the U.S. Food and Drug Administration (FDA) and FEI began manufacturing it for distribution outside the United States in 1984. GynoPharma (originally GynoMed) began marketing it in the U.S. in May 1988. On August 2, 1995, Ortho-McNeil acquired GynoPharma and began marketing Paragard in the U.S. On January 1, 2004, FEI Women's Health acquired the patent from the Population Council and U.S. marketing rights from Ortho-McNeil. On November 10, 2005, Duramed Pharmaceuticals, a subsidiary of Barr Pharmaceuticals, acquired FEI Women's Health and Paragard. On July 18, 2008, it was announced that Teva Pharmaceutical Industries Ltd. 
would acquire Barr Pharmaceuticals.", "title": "Brands" }, { "paragraph_id": 48, "text": "On November 1, 2017, The Cooper Companies acquired Paragard from Teva Pharmaceutical Industries for approximately $1.1 billion.", "title": "Brands" }, { "paragraph_id": 49, "text": "The original FDA approval of Paragard in 1984 was for 4 years of continuous use; this was later extended to 6 years in 1989, then 8 years in 1991, then 10 years in 1994. (ATC code G02BA02 (WHO))", "title": "Brands" } ]
A copper intrauterine device (IUD), also known as an intrauterine coil or copper coil or non-hormonal IUD, is a type of intrauterine device which contains copper. It is used for birth control and emergency contraception within five days of unprotected sex. It is one of the most effective forms of birth control with a one-year failure rate around 0.7%. The device is placed in the uterus and lasts up to twelve years. It may be used by women of all ages regardless of whether or not they have had children. Following removal, fertility quickly returns. Side effects may include heavy menstrual periods, and rarely the device may come out. It is not recommended for people at high risk of sexually transmitted infections as it may increase the risk of pelvic inflammatory disease in the first three weeks after insertion. It is recommended for people who do not tolerate hormonal contraceptives well. If a woman becomes pregnant with an IUD in place, removal is recommended. Very rarely, uterine perforation may occur during insertion if not done properly. The copper IUD is a type of long-acting reversible birth control. It primarily works by killing the sperm. The copper IUD came into medical use in the 1970s. It is on the World Health Organization's List of Essential Medicines. They are used by more than 170 million women globally.
2001-12-14T13:45:52Z
2023-11-29T23:18:35Z
[ "Template:Cite journal", "Template:Blockquote", "Template:Birth control methods", "Template:About", "Template:ATC", "Template:Cn", "Template:Refimprove section", "Template:Cite book", "Template:Short description", "Template:Infobox birth control", "Template:Reflist", "Template:Cite web", "Template:Portal bar" ]
https://en.wikipedia.org/wiki/Copper_IUD
15,379
Isle Royale National Park
Isle Royale National Park is an American national park consisting of Isle Royale, along with more than 400 small adjacent islands and the surrounding waters of Lake Superior, in the state of Michigan. Isle Royale is 45 mi (72 km) long and 9 mi (14 km) wide, with an area of 206.73 sq mi (535.4 km²), making it the fourth-largest lake island in the world. In addition, it is the largest natural island in Lake Superior, the second-largest island in the Great Lakes (after Manitoulin Island), the third-largest in the contiguous United States (after Long Island and Padre Island), and the 33rd-largest island in the United States. Isle Royale National Park was established on April 3, 1940, then additionally protected from development by wilderness area designation in 1976, declared a UNESCO International Biosphere Reserve in 1980, and added to the National Register of Historic Places in 2019 as the Minong Traditional Cultural Property. The park covers 894 sq mi (2,320 km²), with 209 sq mi (540 km²) of land and 685 sq mi (1,770 km²) of surrounding waters. The park's northern boundary lies adjacent to the Canadian Lake Superior National Marine Conservation Area along the international border. With 25,894 visits in 2021, it is the seventh least-visited National Park in the United States. In 1875, Isle Royale was set off from Keweenaw County, as a separate county, "Isle Royale County". In 1897, the county was dissolved, and the island was reincorporated into Keweenaw County. The highest point on the island is Mount Desor at 1,394 ft (425 m), or about 800 ft (240 m) above lake level. Isle Royale, the largest island in Lake Superior, is over 45 mi (72 km) in length and 9 mi (14 km) wide at its widest point. The park is made up of Isle Royale itself and approximately 400 smaller islands, along with any submerged lands within 4.5 mi (7.2 km) of the surrounding islands (16USC408g). 
Isle Royale is within about 15 mi (24 km) of the shore of the Canadian province of Ontario and adjacently, the state of Minnesota (near the city of Thunder Bay), and is 56 mi (90 km) from the Michigan shore, on the Keweenaw Peninsula, itself part of the Upper Peninsula. There are no roads on the island, and wheeled vehicles or devices, other than wheelchairs, are not permitted. Rock Harbor has wheeled carts available to move personal belongings from the Rock Harbor marina to the cabins and hotel. Also, the National Park Service employs tractors and utility terrain vehicles to move items around the developed areas at Ozaagaateng, Rock Harbor, and Mott Island. Topsoil tends to be thin, which favors trees that have horizontal root patterns such as balsam fir, white spruce, and black spruce. Siskiwit Lake is the largest lake on the island. It has cold, clear water which is relatively low in nutrients. Siskiwit Lake contains several islands, including Ryan Island, the largest. According to the Köppen climate classification system, Isle Royale National Park has a mild summer humid continental climate (Dfb). According to the United States Department of Agriculture, the Plant Hardiness zone is 4b at 1178 ft (359 m) elevation with an average annual extreme minimum temperature of -24.2 °F (-31.2 °C). There is no weather station in the park, but the PRISM Climate Group, a project of Oregon State University, provides interpolated data for the island based on the climates of nearby areas. The island was a common hunting ground for native people from nearby Minnesota and Ontario. A canoe voyage of thirteen miles is necessary to reach the island's west end from the mainland. Large quantities of copper artifacts found in Indian mounds and settlements, some dating back to 3000 B.C., were most likely mined on Isle Royale and the nearby Keweenaw Peninsula. 
The island has hundreds of pits and trenches up to 65 feet (20 m) deep from these indigenous peoples, with most in the McCargoe Cove area. Carbon-14 testing of wood remains found in sockets of copper artifacts indicates that they are at least 6500 years old. In Prehistoric Copper Mining in the Lake Superior Region, Drier and Du Temple estimated that over 750,000 tons of copper had been mined from the region. However, David Johnson and Susan Martin contend that their estimate was based on exaggerated and inaccurate assumptions. The Jesuit missionary Dablon published an account in 1669-70 of "an island called Menong, celebrated for its copper." Menong, or Minong, was the native term for the island, and is the basis for Minong Ridge. Prospecting began in earnest when the Chippewas relinquished their claims to the island in 1843, starting with many of the original native pits. This activity had ended by 1855, when no economic deposits were found. The Minong Mine and Island Mine were the result of renewed but short-lived activity from 1873 to 1881. Isle Royale was given to the United States by the 1783 treaty with Great Britain (formerly part of the Indian Reserve disputed by the United States) but the British remained in control until after the War of 1812, and the Ojibwa peoples considered the island to be their territory. The Ojibwas ceded the island to the U.S. in the 1842 Treaty of La Pointe, with the Grand Portage Band unaware that neither they nor Isle Royale were in British territory. With the clarification to the Ojibwas of the 1842 Webster–Ashburton Treaty that was signed before the Treaty of La Pointe, the Ojibwas re-affirmed the 1842 Treaty of La Pointe in the 1844 Isle Royale Agreement, with the Grand Portage Band signing the agreement as an addendum to the 1842 treaty. In the mid-1840s, a report by Douglass Houghton, Michigan's first state geologist, set off a copper boom in the state, and the first modern copper mines were opened on the island. 
Evidence of the earlier mining efforts was everywhere, in the form of many stone hammers, some copper artifacts, and places where copper had been partially worked out of the rock but left in place. The ancient pits and trenches led to the discovery of many of the copper deposits that were mined in the 19th century. The remoteness of the island, combined with the small veins of copper, caused most of the 19th-century mines to fail quickly. Between the miners and commercial loggers, much of the island was deforested during the late 19th century. Once the island became a national park in 1940, logging and other exploitive activities ended, and the forest began to regenerate. The island was once the site of several lake trout and whitefish fisheries, as well as a few resorts. The fishing industry has declined considerably, but continues at Edisen Fishery. Today, it has no permanent inhabitants; the small communities of Scandinavian fishermen were removed by the United States National Park Service after the island became a national park in the 1940s. About 12 families still have lifetime leases for their cabins and claim Isle Royale as their heritage, and several descendant fishermen fish the Isle Royale waters commercially. Because numerous small islands surround Isle Royale, ships were once guided through the area by lighthouses at Passage Island, Rock Harbor, Rock of Ages, and Isle Royale Lighthouse on Menagerie Island. The western tip of the island is home to several shipwrecks that are very popular with scuba divers, including the SS America. The NPS Submerged Resources Center mapped the 10 most famous of the shipwrecks contained within the park, and published Shipwrecks of Isle Royale National Park; The Archeological Survey, which gives an overview of the maritime history of the area. 
The area's notoriously harsh weather, dramatic underwater topography, the island's central location on historic shipping routes, and the cold, fresh water have resulted in largely intact, well preserved wrecks throughout the park. In January 2019, the entire island chain was added to the National Register of Historic Places by the federal government. On the Register it is called 'the Minong Traditional Cultural Property.' In 1845, an Ojibwe woman named Angelique and her voyageur husband Charlie Mott were left on Isle Royale, as hires for Cyrus Mendenhall and the Lake Superior Copper Company. They were hired and carried to Isle Royale by Mendenhall's schooner, the Algonquin, first to scout for copper. Angelique found a large mass of copper ore, which she and her husband were then hired to stay and guard until a barge could come to retrieve it, promised within three months' time. They were dropped off in July and were left stranded there until the following spring. They were left with minimal provisions, which consisted of a half-barrel of flour, six pounds of butter, and some beans. A supply boat was promised to arrive after the first few weeks, but it was never sent out. The full events were chronicled in a footnote as told by Angelique, in the first printing of a book called "The Honorable Peter White" by Ralph D. Williams in 1907; Angelique's story was pulled from the subsequent printing, thus making it the only written record that survives. Humans have not normally settled year-round on Isle Royale. For about three thousand years, Native Americans used the land for copper and fish. These Native Americans usually limited their visits to the island in the summer. Americans in the nineteenth century did likewise. A number of habitats exist on the island, the primary being boreal forest, similar to neighboring Ontario and Minnesota. 
Upland areas along some of the ridges are effectively "balds" with exposed bedrock and a few scrubby trees, blueberry bushes, and hardy grasses. Occasional marshes exist, which are typically the by-product of beaver activities. There are also several lakes, often with wooded or marshy shores. The climate, especially in lowland areas, is heavily influenced by the cold waters of Lake Superior. According to the A. W. Kuchler U.S. Potential natural vegetation Types, Isle Royale National Park has a Great Lakes Spruce/Fir (93) potential vegetation type and a Northern Conifer Forest (22) potential vegetation form. The predominant floral habitats of Isle Royale are within the Laurentian Mixed Forest Province. The area is a temperate broadleaf and mixed forests biome transition zone between the true boreal forest to the north and Big Woods to the south, with characteristics of each. It has areas of both broadleaf and conifer forest cover, and bodies of water ranging from conifer bogs to swamps. Conifers include jack pines (Pinus banksiana), black and white spruces (Picea mariana and Picea glauca), balsam firs (Abies balsamea), and eastern redcedars (Juniperus virginiana). Deciduous trees include quaking aspens (Populus tremuloides), red oaks (Quercus rubra), paper birches (Betula papyrifera), American mountain ash (Sorbus americana), red maples (Acer rubrum), sugar maples (Acer saccharum), and mountain maples (Acer spicatum). There are over 600 species of flowering plants found in Isle Royale National Park such as wild sarsaparilla, marsh-marigold, wood lily and prickly wild rose. Isle Royale National Park is known for its timber wolf and moose populations which are studied by scientists investigating predator-prey relationships in a closed environment. There is a cyclical relationship between the two animals: as the moose increase in population, so do the wolves. Eventually, the wolves kill too many moose and begin to starve and lower their reproductive rates. 
This is made easier because Isle Royale, being so remote, has been colonized by only about one third of the mainland mammal species. In addition, the environment is unique in that it is the only known place where wolves and moose coexist without the presence of bears. Other common mammals are red foxes, beavers, and red squirrels. Some foxes are accustomed to human contact and can be seen prowling the campgrounds at dawn, looking for stray scraps left by unwary campers. For its part, the wolf is an elusive species which avoids human interaction. Few documented cases of direct wolf/human contact exist. Ermine have been periodically sighted around docks. Other mammals that can be seen include mink along the various lake shores and muskrats (occasionally) at beaver ponds. Several species of bat also exist on the island. Reptiles include the eastern garter snake, painted turtle, and northern redbelly snake. Six species of frogs and three species of salamander also live on the island. Historically neither moose nor wolves inhabited Isle Royale. Just prior to becoming a national park, the large mammals on Isle Royale were Canada lynx and the boreal woodland caribou. Archeological evidence indicates both of these species were present on Isle Royale for 3,500 years prior to being removed by direct human actions (hunting, trapping, mining, logging, fires, competition for resources from exotic species and possibly disease due to the introduction of invasive species). The last caribou documented on Isle Royale was in 1925. Though lynx were removed by the 1930s, some have periodically crossed the ice bridge from neighboring Ontario, Canada, the most recent being an individual sighting in 1980. Although lynx are no longer present on the island, their primary prey, snowshoe hares, remain. Before the appearance of wolves, coyotes were also predators on the island. Coyotes appeared around 1905 and disappeared shortly after wolves arrived in the 1950s. 
Four wolves were brought from Minnesota in 2018 after some debate as to whether or not the introduction was an unnatural intervention. Moose are believed to have colonized Isle Royale sometime between 1905 and 1912. It was initially believed that a small herd of moose (moose typically do not travel in herds) colonized the islands by crossing the ice from the adjacent mainland; later this theory was modified to a herd of moose swimming 20 miles across Lake Superior from the nearest mainland. The improbability of these theories received little scrutiny until recent years. Although no thorough scientific investigation to determine how moose arrived on Isle Royale has been carried out to date, both cultural and genetic evidence indicates they were likely introduced by humans to create a private hunting preserve in the early 1900s. The cultural evidence that moose were trapped in northwestern Minnesota and transported to Isle Royale seemed unlikely until decades later when genetic evidence revealed the moose on Isle Royale were more closely related to moose in the far northwestern Minnesota/Manitoba border area than the mainland adjacent to Isle Royale in far northeastern Minnesota bordering Ontario. Further evidence has also shown that the Washington Harbor Club, a group of well-to-do businessmen, owned various buildings on Isle Royale in addition to railroads that ran from Baudette to Duluth and Two Harbors and so had the means to transport moose from northwestern Minnesota to Two Harbors. There are usually around 25 wolves and 1000 moose on the island, but the numbers change greatly year to year. In the 2006-2007 winter, a survey found 385 moose and 21 wolves in 3 packs. In spring 2008, 23 wolves and approximately 650 moose were counted. However, recent reductions in winter pack ice had ended replenishment of the wolf population from the mainland. 
Due to genetic inbreeding, the wolf population had declined to two individuals in 2016, causing researchers to expect that the island's wolf population would eventually become extinct. At the same time, the island's moose population had exploded to an estimated 1600. By November 2017, the wolf population was down to one, a female. In December 2016, the National Park Service put forward an initial plan to bring additional wolves to the island in order to prevent the pack from disappearing completely. The decision to relocate 20-30 wolves to the island was approved and from September 2018 to September 2019, 19 wolves were relocated to Isle Royale from various locations in Minnesota, Michigan, and Ontario. As of April 14, 2020, there were an estimated 14 wolves remaining on the island. The island is composed largely of ridges, running roughly southwest-to-northeast. The main ridge, Greenstone Ridge, is over 1,000 ft (300 m) in many places. Greenstone belts are exposed, with rounded stones of chlorastrolite, also known as greenstone, near and in the lake. The two main rock assemblages found on the island include the Portage Lake Volcanics and the Copper Harbor Conglomerate, both Precambrian in age. The volcanics are mainly ophitic flood basalts, some 100 individual flows over an accumulated thickness of at least 10,000 feet. The conglomerate outcrops on the southwestern portion of the island and consists of sedimentary rock derived from volcanic rocks in present-day Minnesota. Glacial erosion accentuated the ridge and valley topography from pre-glacial stream erosion. Glacial striations indicate a generally westward movement of the glaciers as do the recessional moraines west of Lake Desor. Drumlins are found west of Siskiwit Lake. Recent analyses by the USGS of both unmineralized basalt and copper-mineralized rock show that a small amount of naturally occurring mercury is associated with mineralization. 
Native copper and chlorastrolite, the official state gem of Michigan, are secondary minerals filling pore spaces formed by vesicles and fractures within the volcanic rocks. Prehnite and agate amygdules are also plentiful island gemstones. Recreational activity on Isle Royale includes hiking, backpacking, fishing, boating, canoeing, kayaking, and observing nature. Wheeled vehicles are not permitted on Isle Royale; however, wheelchairs are allowed. The island offers approximately 170 mi (270 km) of hiking trails for everything from day hikes to a two-week circumnavigation hike. Some of the hiking trails are quite challenging, with steep grades. The Greenstone Ridge is a high ridge in the center of the island and carries the longest trail in the park, the Greenstone Ridge Trail, which runs 40 mi (64 km) from one end of the island to the other. This is generally done as a 4 or 5 day hike. A boat shuttle can carry hikers back to their starting point. The trail leads to the peak of Mount Desor, at 1,394 ft (425 m), the highest point on the island, and passes through northwoods wilderness, and by inland glacial lakes, swamps, bogs and scenic shorelines. In total there are 165 mi (266 km) of hiking trails. There are also canoe/kayak routes, many involving portages, along coastal bays and inland lakes. The park has two developed areas: Ozaagaateng (formerly Windigo), at the southwest end of the island (docking site for the ferries from Minnesota), with a campstore, showers, campsites, rustic camper cabins, and a boat dock. Rock Harbor on the south side of the northeast end (docking site for the ferries from Michigan), with a campstore, showers, restaurant, lodge, campsites, and a boat dock. Non-camping sleeping accommodations at the park are limited to the lodge at Rock Harbor and the camper cabins at Ozaagaateng. The park has 36 designated wilderness campgrounds. Some campgrounds in the interior are accessible only by trail or by canoe/kayak on the island lakes. 
Other campgrounds are accessible only by private boat. The campsites vary in capacity but typically include a few three-sided wood shelters (the fourth wall is screened) with floors and roofs, and several individual sites suitable for pitching a small tent. Some tent sites with space for groups of up to 10 are available, and are used for overflow if all the individual sites are filled. The only amenities at the campgrounds are pit toilets, picnic tables, and fire-rings at specific areas. Campfires are not permitted at most campgrounds; gas or alcohol camp stoves are recommended. Drinking and cooking water must be drawn from local water sources (Lake Superior and inland lakes) and filtered, treated, or boiled to avoid parasites. Hunting is not permitted, but fishing is, and edible berries (blueberries, thimbleberries) may be picked from the trail. The park is accessible by ferries, floatplanes, and passenger ships during the summer months—from Houghton and Copper Harbor in Michigan and Grand Portage in Minnesota. Private boats travel to the island from the coasts of Michigan, Minnesota, and Ontario. Isle Royale is quite popular with day-trippers in private boats, and day-trip ferry service is provided from Copper Harbor and Grand Portage to and from the park. Isle Royale is the only American national park to entirely close in the winter months, from November 1 through April 15, due to extreme weather conditions and for the safety and protection of visitors. Isle Royale is the least-visited national park in the contiguous United States, due to the winter closing and the distance across Lake Superior to reach the park. The average annual visitation was about 19,000 in the period from 2009 to 2018, with 25,798 visiting in 2018. Only three of the most remote Alaskan national parks—Lake Clark, Kobuk Valley and Gates of the Arctic—receive fewer visitors. Scheduled ferry service operates from Grand Portage, Copper Harbor and Houghton. 
The Grand Portage ferries reach the island in 1½ hours, and stay 4 hours at the island, allowing time for hiking, a guided hike or program by the park staff, and picnics. The Isle Royale Queen serves park visitors out of Copper Harbor, on the northern Upper Peninsula coast of Michigan. It arrives at Rock Harbor in the park in 3 to 3½ hours and spends 3½ hours before returning to Copper Harbor. The Sea Hunter operates round-trips and offers day trips to the Ozaagaateng visitor center through much of the season, and less frequently in early summer and autumn; it will transport kayaks and canoes for visitors wanting to explore the park from the water. It is the fastest ferry serving the island and arrives in 1½ hours, including some sightseeing points along the way out and back. Because of the relatively short boat ride, day visitors are able to get four hours on the island, and get back to the mainland earlier in the afternoon. This gives visitors on a tight schedule time to visit the Grand Portage National Monument or other attractions in the same day. The Ranger III is a 165 ft (50 m) ship that serves park visitors from Houghton, Michigan to Rock Harbor. It is operated by the National Park Service, and is said to be the largest piece of equipment in the National Park system. It carries 125 passengers, along with canoes, kayaks, and even small powerboats. It is a six-hour voyage from Houghton to the park. The ship stays overnight at Rock Harbor before returning the next day, making two round trips each week from June to mid-September. Briefly in the 2008 season, the Ranger III carried visitors to and from Ozaagaateng. This was not continued after four trips, due to low interest and long crossing times. In 2012, Park Superintendent Phyllis Green required the Ranger III to purify its ballast water. The Voyageur II, out of Grand Portage, crosses up to three times a week, overnighting at Rock Harbor and providing transportation between popular lakeside campgrounds. 
In the fall season, in addition to carrying campers and hikers, it provides day-trip service to Ozaagaateng on weekends. The Voyageur transports kayaks and canoes for visitors wanting to explore the island from the water. The Voyageur II and other boat taxi services ferry hikers to points along the island, allowing a one-way hike back to Rock Harbor or Ozaagaateng. Visitors may land at Rock Harbor and depart from Ozaagaateng several days later, or vice versa. Hikers frequently ride it in one direction to do a cross-island hike and then get picked up at the other end.
[ { "paragraph_id": 0, "text": "Isle Royale National Park is an American national park consisting of Isle Royale, along with more than 400 small adjacent islands and the surrounding waters of Lake Superior, in the state of Michigan.", "title": "" }, { "paragraph_id": 1, "text": "Isle Royale is 45 mi (72 km) long and 9 mi (14 km) wide, with an area of 206.73 sq mi (535.4 km²), making it the fourth-largest lake island in the world. In addition, it is the largest natural island in Lake Superior, the second-largest island in the Great Lakes (after Manitoulin Island), the third-largest in the contiguous United States (after Long Island and Padre Island), and the 33rd-largest island in the United States.", "title": "" }, { "paragraph_id": 2, "text": "Isle Royale National Park was established on April 3, 1940, then additionally protected from development by wilderness area designation in 1976, declared a UNESCO International Biosphere Reserve in 1980, and added to the National Register of Historic Places in 2019 as the Minong Traditional Cultural Property. The park covers 894 sq mi (2,320 km²), with 209 sq mi (540 km²) of land and 685 sq mi (1,770 km²) of surrounding waters.", "title": "" }, { "paragraph_id": 3, "text": "The park's northern boundary lies adjacent to the Canadian Lake Superior National Marine Conservation Area along the international border. With 25,894 visits in 2021, it is the seventh least-visited National Park in the United States.", "title": "" }, { "paragraph_id": 4, "text": "In 1875, Isle Royale was set off from Keweenaw County, as a separate county, \"Isle Royale County\". In 1897, the county was dissolved, and the island was reincorporated into Keweenaw County. The highest point on the island is Mount Desor at 1,394 ft (425 m), or about 800 ft (240 m) above lake level.", "title": "Geography" }, { "paragraph_id": 5, "text": "Isle Royale, the largest island in Lake Superior, is over 45 mi (72 km) in length and 9 mi (14 km) wide at its widest point. 
The park is made up of Isle Royale itself and approximately 400 smaller islands, along with any submerged lands within 4.5 mi (7.2 km) of the surrounding islands (16USC408g).", "title": "Geography" }, { "paragraph_id": 6, "text": "Isle Royale is within about 15 mi (24 km) of the shore of the Canadian province of Ontario and adjacently, the state of Minnesota (near the city of Thunder Bay), and is 56 mi (90 km) from the Michigan shore, on the Keweenaw Peninsula, itself part of the Upper Peninsula. There are no roads on the island, and wheeled vehicles or devices, other than wheelchairs, are not permitted. Rock Harbor has wheeled carts available to move personal belongings from the Rock Harbor marina to the cabins and hotel. Also, the National Park Service employs tractors and utility terrain vehicles to move items around the developed areas at Ozaagaateng, Rock Harbor, and Mott Island.", "title": "Geography" }, { "paragraph_id": 7, "text": "Topsoil tends to be thin, which favors trees that have horizontal root patterns such as balsam fir, white spruce, and black spruce.", "title": "Geography" }, { "paragraph_id": 8, "text": "Siskiwit Lake is the largest lake on the island. It has cold, clear water which is relatively low in nutrients. Siskiwit Lake contains several islands, including Ryan Island, the largest.", "title": "Geography" }, { "paragraph_id": 9, "text": "According to the Köppen climate classification system, Isle Royale National Park has a mild summer humid continental climate (Dfb). 
According to the United States Department of Agriculture, the Plant Hardiness zone is 4b at 1178 ft (359 m) elevation with an average annual extreme minimum temperature of -24.2 °F (-31.2 °C).", "title": "Climate" }, { "paragraph_id": 10, "text": "There is no weather station in the park, but the PRISM Climate Group, a project of Oregon State University, provides interpolated data for the island based on the climates of nearby areas.", "title": "Climate" }, { "paragraph_id": 11, "text": "The island was a common hunting ground for native people from nearby Minnesota and Ontario. A canoe voyage of thirteen miles is necessary to reach the island's west end from the mainland.", "title": "History" }, { "paragraph_id": 12, "text": "Large quantities of copper artifacts found in Indian mounds and settlements, some dating back to 3000 B.C., were most likely mined on Isle Royale and the nearby Keweenaw Peninsula. The island has hundreds of pits and trenches up to 65 feet (20 m) deep from these indigenous peoples, with most in the McCargoe Cove area. Carbon-14 testing of wood remains found in sockets of copper artifacts indicates that they are at least 6500 years old.", "title": "History" }, { "paragraph_id": 13, "text": "In Prehistoric Copper Mining in the Lake Superior Region, Drier and Du Temple estimated that over 750,000 tons of copper had been mined from the region. However, David Johnson and Susan Martin contend that their estimate was based on exaggerated and inaccurate assumptions. The Jesuit missionary Dablon published an account in 1669-70 of \"an island called Menong, celebrated for its copper.\" Menong, or Minong, was the native term for the island, and is the basis for Minong Ridge. Prospecting began in earnest when the Chippewas relinquished their claims to the island in 1843, starting with many of the original native pits. This activity had ended by 1855, when no economic deposits were found. 
The Minong Mine and Island Mine were the result of renewed but short-lived activity from 1873 to 1881.", "title": "History" }, { "paragraph_id": 14, "text": "Isle Royale was given to the United States by the 1783 treaty with Great Britain (formerly part of the Indian Reserve disputed by the United States) but the British remained in control until after the War of 1812, and the Ojibwa peoples considered the island to be their territory. The Ojibwas ceded the island to the U.S. in the 1842 Treaty of La Pointe, with the Grand Portage Band unaware that neither they nor Isle Royale were in British territory. With the clarification to the Ojibwas of the 1842 Webster–Ashburton Treaty that was signed before the Treaty of La Pointe, the Ojibwas re-affirmed the 1842 Treaty of La Pointe in the 1844 Isle Royale Agreement, with the Grand Portage Band signing the agreement as an addendum to the 1842 treaty.", "title": "History" }, { "paragraph_id": 15, "text": "In the mid-1840s, a report by Douglass Houghton, Michigan's first state geologist, set off a copper boom in the state, and the first modern copper mines were opened on the island. Evidence of the earlier mining efforts was everywhere, in the form of many stone hammers, some copper artifacts, and places where copper had been partially worked out of the rock but left in place. The ancient pits and trenches led to the discovery of many of the copper deposits that were mined in the 19th century. The remoteness of the island, combined with the small veins of copper, caused most of the 19th-century mines to fail quickly. Between the miners and commercial loggers, much of the island was deforested during the late 19th century. Once the island became a national park in 1940, logging and other exploitive activities ended, and the forest began to regenerate.", "title": "History" }, { "paragraph_id": 16, "text": "The island was once the site of several lake trout and whitefish fisheries, as well as a few resorts. 
The fishing industry has declined considerably, but continues at Edisen Fishery. Today, the island has no permanent inhabitants; the small communities of Scandinavian fishermen were removed by the United States National Park Service after the island became a national park in the 1940s. About 12 families still have lifetime leases for their cabins and claim Isle Royale as their heritage, and several descendant fishermen fish the Isle Royale waters commercially.", "title": "History" }, { "paragraph_id": 17, "text": "Because numerous small islands surround Isle Royale, ships were once guided through the area by lighthouses at Passage Island, Rock Harbor, Rock of Ages, and Isle Royale Lighthouse on Menagerie Island. The western tip of the island is home to several shipwrecks that are very popular with scuba divers, including the SS America. The NPS Submerged Resources Center mapped the 10 most famous of the shipwrecks contained within the park, and published Shipwrecks of Isle Royale National Park: The Archeological Survey, which gives an overview of the maritime history of the area. The area's notoriously harsh weather, dramatic underwater topography, the island's central location on historic shipping routes, and the cold, fresh water have resulted in largely intact, well preserved wrecks throughout the park.", "title": "History" }, { "paragraph_id": 18, "text": "In January 2019, the entire island chain was added to the National Register of Historic Places by the federal government. On the Register it is called 'the Minong Traditional Cultural Property.'", "title": "History" }, { "paragraph_id": 19, "text": "In 1845, an Ojibwe woman named Angelique and her voyageur husband Charlie Mott were left on Isle Royale, as hires for Cyrus Mendenhall and the Lake Superior Copper Company. They were hired and carried to Isle Royale by Mendenhall's schooner, the Algonquin, first to scout for copper. 
Angelique found a large mass of copper ore, which she and her husband were hired to stay and guard until a barge could come to retrieve it, promised in no more than 3 months' time. They were dropped off in July and were left stranded there until the following spring. They were left with minimal provisions, which consisted of a half-barrel of flour, six pounds of butter, and some beans. A supply boat was promised to arrive after the first few weeks, but it was never sent out.", "title": "History" }, { "paragraph_id": 20, "text": "The full events were chronicled in a footnote as told by Angelique, in the first printing of a book called \"The Honorable Peter White\" by Ralph D. Williams in 1907; Angelique's story was pulled from the subsequent printing, thus making the first printing the only written record that survives. Humans have not normally settled year-round on Isle Royale. For about three thousand years, Native Americans used the land for copper and fish. These Native Americans usually limited their visits to the island to the summer months. Americans in the nineteenth century did likewise.", "title": "History" }, { "paragraph_id": 21, "text": "A number of habitats exist on the island, the primary being boreal forest, similar to neighboring Ontario and Minnesota. Upland areas along some of the ridges are effectively \"balds\" with exposed bedrock and a few scrubby trees, blueberry bushes, and hardy grasses. Occasional marshes exist, which are typically the by-product of beaver activities. There are also several lakes, often with wooded or marshy shores. The climate, especially in lowland areas, is heavily influenced by the cold waters of Lake Superior.", "title": "Natural history" }, { "paragraph_id": 22, "text": "According to the A. W. Kuchler U.S. 
Potential Natural Vegetation Types, Isle Royale National Park has a Great Lakes Spruce/Fir (93) potential vegetation type and a Northern Conifer Forest (22) potential vegetation form.", "title": "Natural history" }, { "paragraph_id": 23, "text": "The predominant floral habitats of Isle Royale are within the Laurentian Mixed Forest Province. The area is a temperate broadleaf and mixed forests biome transition zone between the true boreal forest to the north and Big Woods to the south, with characteristics of each. It has areas of both broadleaf and conifer forest cover, and bodies of water ranging from conifer bogs to swamps.", "title": "Natural history" }, { "paragraph_id": 24, "text": "Conifers include jack pines (Pinus banksiana), black and white spruces (Picea mariana and Picea glauca), balsam firs (Abies balsamea), and eastern redcedars (Juniperus virginiana).", "title": "Natural history" }, { "paragraph_id": 25, "text": "Deciduous trees include quaking aspens (Populus tremuloides), red oaks (Quercus rubra), paper birches (Betula papyrifera), American mountain ash (Sorbus americana), red maples (Acer rubrum), sugar maples (Acer saccharum), and mountain maples (Acer spicatum). There are over 600 species of flowering plants found in Isle Royale National Park such as wild sarsaparilla, marsh-marigold, wood lily and prickly wild rose.", "title": "Natural history" }, { "paragraph_id": 26, "text": "Isle Royale National Park is known for its timber wolf and moose populations, which are studied by scientists investigating predator-prey relationships in a closed environment. There is a cyclical relationship between the two animals: as the moose increase in population, so do the wolves. Eventually, the wolves kill too many moose and begin to starve and lower their reproductive rates. Such study is made easier because Isle Royale, being so remote, has been colonized by only about one third of the mainland mammal species. 
In addition, the environment is unique in that it is the only known place where wolves and moose coexist without the presence of bears.", "title": "Natural history" }, { "paragraph_id": 27, "text": "Other common mammals are red foxes, beavers, and red squirrels. Some foxes are accustomed to human contact and can be seen prowling the campgrounds at dawn, looking for stray scraps left by unwary campers. For its part, the wolf is an elusive species which avoids human interaction. Few documented cases of direct wolf/human contact exist. Ermine have been periodically sighted around docks. Other mammals that can be seen include mink along the various lake shores and muskrats (occasionally) at beaver ponds. Several species of bat also exist on the island. Reptiles include the eastern garter snake, painted turtle, and northern redbelly snake. Six species of frogs and three species of salamander also live on the island.", "title": "Natural history" }, { "paragraph_id": 28, "text": "Historically neither moose nor wolves inhabited Isle Royale. Just prior to becoming a national park the large mammals on Isle Royale were Canada lynx and the boreal woodland caribou. Archeological evidence indicates both of these species were present on Isle Royale for 3,500 years prior to being removed by direct human actions (hunting, trapping, mining, logging, fires, competition for resources from exotic species and possibly disease due to the introduction of invasive species). The last caribou documented on Isle Royale was in 1925. Though lynx were removed by the 1930s, some have periodically crossed the ice bridge from neighboring Ontario, Canada, the most recent being an individual sighting in 1980. Although lynx are no longer present on the island, their primary prey, snowshoe hares, remain. Before the appearance of wolves, coyotes were also predators on the island. Coyotes appeared around 1905 and disappeared shortly after wolves arrived in the 1950s. 
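The boom-and-bust cycle described above is the textbook predator-prey dynamic. A minimal Lotka-Volterra sketch reproduces the oscillation (the rate constants here are illustrative assumptions, not parameters fitted to the actual Isle Royale wolf-moose study):

```python
def simulate(moose=1000.0, wolves=25.0, years=100, dt=0.01,
             birth=0.30, predation=0.01, efficiency=0.0005, death=0.25):
    """Forward-Euler integration of the classic Lotka-Volterra equations."""
    history = []
    steps_per_year = int(1 / dt)
    for step in range(int(years / dt)):
        dm = (birth * moose - predation * moose * wolves) * dt
        dw = (efficiency * moose * wolves - death * wolves) * dt
        moose += dm
        wolves += dw
        if step % steps_per_year == 0:  # record once per simulated year
            history.append((moose, wolves))
    return history

# Both populations oscillate around the equilibrium point
# (moose* = death/efficiency = 500, wolves* = birth/predation = 30).
trajectory = simulate()
```

With these constants, both populations swing through multi-decade cycles, echoing the rise, starve, and recover pattern the text describes.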
Four wolves were brought from Minnesota in 2018 after some debate as to whether or not the introduction was an unnatural intervention.", "title": "Natural history" }, { "paragraph_id": 29, "text": "Moose are believed to have colonized Isle Royale sometime between 1905 and 1912. It was initially believed that a small herd of moose (moose typically do not travel in herds) colonized the islands by crossing the ice from the adjacent mainland; later this theory was modified to a herd of moose swimming 20 miles across Lake Superior from the nearest mainland. The improbability of these theories received little scrutiny until recent years. Although no thorough scientific investigation to determine how moose arrived on Isle Royale has been carried out to date, both cultural and genetic evidence indicates they were likely introduced by humans to create a private hunting preserve in the early 1900s. The cultural evidence that moose were trapped in northwestern Minnesota and transported to Isle Royale seemed unlikely until decades later when genetic evidence revealed the moose on Isle Royale were more closely related to moose in the far northwestern Minnesota/Manitoba border area than the mainland adjacent to Isle Royale in far northeastern Minnesota bordering Ontario. Further evidence has also shown that the Washington Harbor Club, a group of well-to-do businessmen, owned various buildings on Isle Royale in addition to railroads that ran from Baudette to Duluth and Two Harbors and so had the means to transport moose from northwestern Minnesota to Two Harbors.", "title": "Natural history" }, { "paragraph_id": 30, "text": "There are usually around 25 wolves and 1000 moose on the island, but the numbers change greatly year to year. In the 2006-2007 winter, a survey found 385 moose and 21 wolves in 3 packs. In spring 2008, 23 wolves and approximately 650 moose were counted. 
However, recent reductions in winter pack ice had ended replenishment of the wolf population from the mainland. Due to genetic inbreeding, the wolf population had declined to two individuals in 2016, causing researchers to expect that the island's wolf population would eventually become extinct. At the same time, the island's moose population had exploded to an estimated 1600. By November 2017, the wolf population was down to one, a female.", "title": "Natural history" }, { "paragraph_id": 31, "text": "In December 2016, the National Park Service put forward an initial plan to bring additional wolves to the island in order to prevent the pack from disappearing completely. The decision to relocate 20-30 wolves to the island was approved and from September 2018 to September 2019, 19 wolves were relocated to Isle Royale from various locations in Minnesota, Michigan, and Ontario. As of April 14, 2020, there were an estimated 14 wolves remaining on the island.", "title": "Natural history" }, { "paragraph_id": 32, "text": "The island is composed largely of ridges, running roughly southwest-to-northeast. The main ridge, Greenstone Ridge, is over 1,000 ft (300 m) in many places. Greenstone belts are exposed, with rounded stones of chlorastrolite, also known as greenstone, near and in the lake.", "title": "Natural history" }, { "paragraph_id": 33, "text": "The two main rock assemblages found on the island include the Portage Lake Volcanics and the Copper Harbor Conglomerate, both Precambrian in age. The volcanics are mainly ophitic flood basalts, some 100 individual flows over an accumulated thickness of at least 10,000 feet. The conglomerate outcrops on the southwestern portion of the island and consists of sedimentary rock derived from volcanic rocks in present-day Minnesota. Glacial erosion accentuated the ridge and valley topography from pre-glacial stream erosion. 
Glacial striations indicate a generally westward movement of the glaciers as do the recessional moraines west of Lake Desor. Drumlins are found west of Siskiwit Lake.", "title": "Natural history" }, { "paragraph_id": 34, "text": "Recent analyses by the USGS of both unmineralized basalt and copper-mineralized rock show that a small amount of naturally occurring mercury is associated with mineralization.", "title": "Natural history" }, { "paragraph_id": 35, "text": "Native copper and chlorastrolite, the official state gem of Michigan, are secondary minerals filling pore spaces formed by vesicles and fractures within the volcanic rocks. Prehnite and agate amygdules are also plentiful island gemstones.", "title": "Natural history" }, { "paragraph_id": 36, "text": "Recreational activity on Isle Royale includes hiking, backpacking, fishing, boating, canoeing, kayaking, and observing nature. Wheeled vehicles are not permitted on Isle Royale; however, wheelchairs are allowed.", "title": "Recreation" }, { "paragraph_id": 37, "text": "The island offers approximately 170 mi (270 km) of hiking trails for everything from day hikes to a two-week circumnavigation hike. Some of the hiking trails are quite challenging, with steep grades. The Greenstone Ridge is a high ridge in the center of the island and carries the longest trail in the park, the Greenstone Ridge Trail, which runs 40 mi (64 km) from one end of the island to the other. This is generally done as a 4 or 5 day hike. A boat shuttle can carry hikers back to their starting point. The trail leads to the peak of Mount Desor, at 1,394 ft (425 m), the highest point on the island, and passes through northwoods wilderness, and by inland glacial lakes, swamps, bogs and scenic shorelines.", "title": "Recreation" }, { "paragraph_id": 38, "text": "In total there are 165 mi (266 km) of hiking trails. 
There are also canoe/kayak routes, many involving portages, along coastal bays and inland lakes.", "title": "Recreation" }, { "paragraph_id": 39, "text": "The park has two developed areas:", "title": "Recreation" }, { "paragraph_id": 40, "text": "Ozaagaateng (formerly Windigo), at the southwest end of the island (docking site for the ferries from Minnesota), with a campstore, showers, campsites, rustic camper cabins, and a boat dock.", "title": "Recreation" }, { "paragraph_id": 41, "text": "Rock Harbor on the south side of the northeast end (docking site for the ferries from Michigan), with a campstore, showers, restaurant, lodge, campsites, and a boat dock. Non-camping sleeping accommodations at the park are limited to the lodge at Rock Harbor and the camper cabins at Ozaagaateng.", "title": "Recreation" }, { "paragraph_id": 42, "text": "The park has 36 designated wilderness campgrounds. Some campgrounds in the interior are accessible only by trail or by canoe/kayak on the island lakes. Other campgrounds are accessible only by private boat. The campsites vary in capacity but typically include a few three-sided wood shelters (the fourth wall is screened) with floors and roofs, and several individual sites suitable for pitching a small tent. Some tent sites with space for groups of up to 10 are available, and are used for overflow if all the individual sites are filled.", "title": "Recreation" }, { "paragraph_id": 43, "text": "The only amenities at the campgrounds are pit toilets, picnic tables, and fire-rings at specific areas. Campfires are not permitted at most campgrounds; gas or alcohol camp stoves are recommended. Drinking and cooking water must be drawn from local water sources (Lake Superior and inland lakes) and filtered, treated, or boiled to avoid parasites. 
Hunting is not permitted, but fishing is, and edible berries (blueberries, thimbleberries) may be picked from the trail.", "title": "Recreation" }, { "paragraph_id": 44, "text": "The park is accessible by ferries, floatplanes, and passenger ships during the summer months—from Houghton and Copper Harbor in Michigan and Grand Portage in Minnesota. Private boats travel to the island from the coasts of Michigan, Minnesota, and Ontario. Isle Royale is quite popular with day-trippers in private boats, and day-trip ferry service is provided from Copper Harbor and Grand Portage to and from the park.", "title": "Recreation" }, { "paragraph_id": 45, "text": "Isle Royale is the only American national park to entirely close in the winter months, from November 1 through April 15, due to extreme weather conditions and for the safety and protection of visitors. Isle Royale is the least-visited national park in the contiguous United States, due to the winter closing and the distance across Lake Superior to reach the park. The average annual visitation was about 19,000 in the period from 2009 to 2018, with 25,798 visiting in 2018. Only three of the most remote Alaskan national parks—Lake Clark, Kobuk Valley and Gates of the Arctic—receive fewer visitors.", "title": "Recreation" }, { "paragraph_id": 46, "text": "Scheduled ferry service operates from Grand Portage, Copper Harbor and Houghton.", "title": "Recreation" }, { "paragraph_id": 47, "text": "The Grand Portage ferries reach the island in 1½ hours, and stay 4 hours at the island, allowing time for hiking, a guided hike or program by the park staff, and picnics.", "title": "Recreation" }, { "paragraph_id": 48, "text": "The Isle Royale Queen serves park visitors out of Copper Harbor, on the northern Upper Peninsula coast of Michigan. 
It arrives at Rock Harbor in the park in 3 to 3½ hours and spends 3½ hours before returning to Copper Harbor.", "title": "Recreation" }, { "paragraph_id": 49, "text": "The Sea Hunter operates round-trips and offers day trips to the Ozaagaateng visitor center through much of the season, and less frequently in early summer and autumn; it will transport kayaks and canoes for visitors wanting to explore the park from the water. It is the fastest ferry serving the island and arrives in 1½ hours, including some sightseeing points along the way out and back. Because of the relatively short boat ride, day visitors are able to get four hours on the island, and get back to the mainland earlier in the afternoon. This gives visitors on a tight schedule time to visit the Grand Portage National Monument or other attractions in the same day.", "title": "Recreation" }, { "paragraph_id": 50, "text": "The Ranger III is a 165 ft (50 m) ship that serves park visitors from Houghton, Michigan to Rock Harbor. It is operated by the National Park Service, and is said to be the largest piece of equipment in the National Park system. It carries 125 passengers, along with canoes, kayaks, and even small powerboats. It is a six-hour voyage from Houghton to the park. The ship stays overnight at Rock Harbor before returning the next day, making two round trips each week from June to mid-September. Briefly in the 2008 season, the Ranger III carried visitors to and from Ozaagaateng. This was not continued after four trips, due to low interest and long crossing times. In 2012, Park Superintendent Phyllis Green required the Ranger III to purify its ballast water.", "title": "Recreation" }, { "paragraph_id": 51, "text": "The Voyageur II, out of Grand Portage, crosses up to three times a week, overnighting at Rock Harbor and providing transportation between popular lakeside campgrounds. 
In the fall season, in addition to carrying campers and hikers, it provides day-trip service to Ozaagaateng on weekends. The Voyageur transports kayaks and canoes for visitors wanting to explore the island from the water. The Voyageur II and other boat taxi services ferry hikers to points along the island, allowing a one-way hike back to Rock Harbor or Ozaagaateng. Visitors may land at Rock Harbor and depart from Ozaagaateng several days later, or vice versa. Hikers frequently ride it in one direction to do a cross-island hike and then get picked up at the other end.", "title": "Recreation" } ]
Isle Royale National Park is an American national park consisting of Isle Royale, along with more than 400 small adjacent islands and the surrounding waters of Lake Superior, in the state of Michigan. Isle Royale is 45 mi (72 km) long and 9 mi (14 km) wide, with an area of 206.73 sq mi (535.4 km2), making it the fourth-largest lake island in the world. In addition, it is the largest natural island in Lake Superior, the second-largest island in the Great Lakes, the third-largest in the contiguous United States, and the 33rd-largest island in the United States. Isle Royale National Park was established on April 3, 1940, then additionally protected from development by wilderness area designation in 1976, declared a UNESCO International Biosphere Reserve in 1980, and added to the National Register of Historic Places in 2019 as the Minong Traditional Cultural Property. The park covers 894 sq mi (2,320 km2), with 209 sq mi (540 km2) of land and 685 sq mi (1,770 km2) of surrounding waters. The park's northern boundary lies adjacent to the Canadian Lake Superior National Marine Conservation Area along the international border. With 25,894 visits in 2021, it is the seventh least-visited National Park in the United States.
2001-12-14T14:05:32Z
2023-12-28T02:21:47Z
[ "Template:Further", "Template:Webarchive", "Template:Commons category", "Template:Keweenaw County, Michigan", "Template:Wikivoyage", "Template:Official website", "Template:Protected areas of Michigan", "Template:Infobox NRHP", "Template:Weather box", "Template:Reflist", "Template:Cite magazine", "Template:Authority control", "Template:Infobox protected area", "Template:See also", "Template:Cite book", "Template:Cite news", "Template:Mdash", "Template:Cite web", "Template:Cite journal", "Template:Greatlakes", "Template:Short description", "Template:Use mdy dates", "Template:Cvt", "Template:Rp", "Template:National parks of the United States" ]
https://en.wikipedia.org/wiki/Isle_Royale_National_Park
15,381
NATO Integrated Air Defense System
The NATO Integrated Air Defense System (NATINADS) is a command and control network combining radars and other facilities spread throughout the NATO alliance's air defence forces. It formed in the mid-1950s and became operational in 1962 as NADGE. It has been constantly upgraded since its formation, notably with the integration of Airborne Early Warning aircraft in the 1970s. The United Kingdom maintained its own network, but it has been fully integrated since the introduction of the Linesman/Mediator system in the 1970s. Similarly, the German network retained a degree of independence through GEADGE. Development was approved by the NATO Military Committee in December 1955. The system was to be based on four air defense regions (ADRs) coordinated by SACEUR (Supreme Allied Commander Europe). Starting in 1956, early warning coverage was extended across Western Europe using 18 radar stations. This part of the system was completed by 1962. Linked to existing national radar sites, the coordinated system was called the NATO Air Defence Ground Environment (NADGE). From 1960, NATO countries agreed to place all their air defence forces under the command of SACEUR in the event of war. These forces included command and control (C2) systems, radar installations, and surface-to-air missile (SAM) units as well as interceptor aircraft. By 1972, NADGE had been converted into NATINADS, consisting of 84 radar sites and associated Control and Reporting Centres (CRCs). In the 1980s, the Airborne Early Warning / Ground Environment Integration Segment (AEGIS) upgraded NATINADS to integrate the AWACS radar picture and all of its information into its visual displays. (NOTE: This AEGIS is not to be confused with the U.S. Navy AEGIS, a shipboard fire control radar and weapons system.) AEGIS processed the information through Hughes H5118ME computers, which replaced the H3118M computers installed at NADGE sites in the late 1960s and early 1970s. 
NATINADS' ability to handle data increased with the faster hardware. The H5118ME computer had 1 megabyte of memory and could execute 1.2 million instructions per second, while the earlier model had only 256 kilobytes of memory and could execute 150,000 instructions per second. NATINADS/AEGIS was complemented in West Germany by the German Air Defence Ground Environment (GEADGE), an updated radar network adding the southern part of Germany to the European system, and by the Coastal Radar Integration System (CRIS), adding data links from Danish coastal radars. To counter hardware obsolescence, in the mid-1990s NATO started the AEGIS Site Emulator (ASE) program, allowing the NATINADS/AEGIS sites to replace the proprietary hardware (the 5118ME computer and the various operator consoles IDM-2, HMD-22, IDM-80) with commercial-off-the-shelf (COTS) servers and workstations. In the early 2000s, the initial ASE capability was expanded so that, thanks to the more powerful hardware, multiple site emulators could run on the same machine, and the system was renamed the Multi-AEGIS Site Emulator (MASE). The NATO system designed to replace MASE in the near future is the Air Command and Control System (ACCS). Because of changing politics, NATO expansion, and financial crises, most European NATO countries are trying to cut defence budgets; as a direct result, many obsolete and outdated NATINADS facilities are being phased out early. As of 2013, operational NATO radar sites in Europe are as follows: Allied Air Command (AIRCOM) is the central command of all NATO air forces on the European continent. The command is based at Ramstein Air Base in Germany and has two subordinate commands in Germany and Spain. The Royal Canadian Air Force and United States Air Force fall under command of the Canadian/American North American Aerospace Defense Command. The Albanian Air Force operates Lockheed Martin AN/TPS-77 radars. 
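Taken at face value, the figures above imply a fourfold memory increase and an eightfold throughput increase between the two computer generations. A back-of-the-envelope check (an illustration only, with the quoted figures hard-coded):

```python
# Figures quoted above for the NADGE-era H3118M and the AEGIS-era H5118ME.
h5118_memory_bytes = 1 * 1024 * 1024   # 1 megabyte
h3118_memory_bytes = 256 * 1024        # 256 kilobytes
h5118_ips = 1_200_000                  # instructions per second
h3118_ips = 150_000

memory_ratio = h5118_memory_bytes / h3118_memory_bytes
speed_ratio = h5118_ips / h3118_ips
print(f"memory: {memory_ratio:.0f}x, throughput: {speed_ratio:.0f}x")  # → memory: 4x, throughput: 8x
```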
The Belgian Air Component's Control and Reporting Centre was based at Glons, where its main radar was also located. The radar was deactivated in 2015 and the Centre moved to Beauvechain Air Base in 2020. The Belgian Control and Reporting Centre reports to CAOC Uedem in Germany and is also responsible for guarding the airspace of Luxembourg. At the new location the Control and Reporting Centre uses digital radar data from the civilian radars of Belgocontrol and the Marconi S-723 radar of the Air Component's Air Traffic Control Centre in Semmerzake. The Bulgarian Air Force's Air Sovereignty Operations Centre is located in Sofia and reports to CAOC Torrejón. The Bulgarian Air Force fields three control and surveillance zones, which operate obsolete Soviet-era radars. The Bulgarian Air Force intends to replace these radars with fewer, but more capable, Western 3-D radars as soon as possible. As of 2018, the future locations of the new radars were unknown. The Royal Canadian Air Force's control centres and radar stations are part of the Canadian/American North American Aerospace Defense Command. The Croatian Air Force and Air Defense's Airspace Surveillance Centre is headquartered in Podvornica and reports to CAOC Torrejón. The Czech Air Force's Control and Reporting Centre is located in Hlavenec and reports to CAOC Uedem. The Royal Danish Air Force's Combined Air Operations Centre (CAOC 1) in Finderup was deactivated in 2008 and replaced at the same location by the Combined Air Operations Centre Finderup (CAOC F), which had responsibility for the airspaces of Iceland, Norway, Denmark and the United Kingdom. CAOC F was deactivated in 2013 and its responsibilities were transferred to CAOC Uedem in Germany. The national Danish Control and Reporting Centre is located at Karup Air Base and reports to CAOC Uedem. 
The Pituffik Space Base in Greenland is a United States Space Force installation and its radars are part of the North American Aerospace Defense Command and United States Space Command. The Estonian Air Force's Air Operations Control Centre is located at Ämari Air Base and reports to the Baltic Air Surveillance Network's Regional Airspace Surveillance Coordination Centre (RASCC) in Karmėlava, Lithuania, which in turn reports to CAOC Uedem. The French Air and Space Force's Air Operations Centre is located at Mont Verdun Air Base and reports to CAOC Uedem. Most French radar sites use the PALMIER radar, which is being taken out of service. By 2022 all PALMIER radars will have been replaced with new radar stations using the GM 403 radar. Additionally, the French Air and Space Force fields a GM 406 radar at the Cayenne-Rochambeau Air Base in French Guiana to protect the Guiana Space Centre in Kourou. The German Air Force's Combined Air Operations Centre (CAOC 2) in Uedem was deactivated in 2008 and reactivated as CAOC Uedem in 2013. CAOC Uedem is responsible for the NATO airspace north of the Alps. The HADR radars are a variant of the HR-3000 radar, while the RRP-117 radars are a variant of the AN/FPS-117. The Hellenic Air Force's main control centres are the 1st Area Control Centre, inside Mount Chortiatis, with Marconi S-743D; the 2nd Area Control Centre, inside Mount Parnitha, with Marconi S-743D; the 9th Control and Warning Station Squadron, on Mount Pelion, with Marconi S-743D; and the 10th Control and Warning Station Squadron, on Mount Chortiatis, with Marconi S-743D. The Hellenic Air Force's Combined Air Operations Centre (CAOC 7) at Larissa Air Base was deactivated in 2013 and its responsibilities transferred to CAOC Torrejón in Spain. The Hellenic Air Force fields two HR-3000, four AR-327 and six Marconi S-743D radar systems; however, as of 2018 the air force is in the process of replacing some of its older systems with three RAT-31DL radars. 
The Hungarian Air Force's Air Operations Centre is located in Veszprém and reports to CAOC Uedem. Three additional radar companies with Soviet-era equipment are subordinate to the 54th Radar Regiment "Veszprém"; however, it is unclear if they will remain in service once Hungary's newest radar at Medina reaches full operational capability. The Iceland Air Defense System, which is part of the Icelandic Coast Guard, monitors Iceland's airspace. Air defense is provided by fighter jets from NATO allies, which rotate units to Keflavik Air Base for the Icelandic Air Policing mission. The Iceland Air Defense System's Control and Reporting Centre is at Keflavik Air Base and reports to CAOC Uedem in Germany. The Italian Air Force's Combined Air Operations Centre (CAOC 5) in Poggio Renatico was deactivated in 2013 and replaced with the Mobile Command and Control Regiment (RMCC) at Bari Air Base, while the Centre's responsibilities were transferred to CAOC Torrejón in Spain. The Latvian Air Force's Air Operations Centre is located at Lielvārde Air Base and reports to the Baltic Air Surveillance Network's Regional Airspace Surveillance Coordination Centre (RASCC) in Karmėlava, Lithuania, which in turn reports to CAOC Uedem. The Lithuanian Air Force's Air Operations Control Centre is located in Karmėlava and reports to the Baltic Air Surveillance Network's Regional Airspace Surveillance Coordination Centre (RASCC) co-located in Karmėlava, which in turn reports to CAOC Uedem. Luxembourg's airspace is monitored and guarded by the Belgian Air Component's Control and Reporting Centre at Beauvechain Air Base. The Armed Forces of Montenegro do not possess a modern air defense radar and the country's airspace is monitored by Italian Air Force radar sites. The Armed Forces Air Surveillance and Reporting Centre is located at Podgorica Airport in Golubovci and reports to CAOC Torrejón in Spain. 
The Royal Netherlands Air Force's Air Operations Centre is located at Nieuw-Milligen and reports to CAOC Uedem. The air force's main radars are being replaced with two modern SMART-L GB radars. The Royal Norwegian Air Force's Combined Air Operations Centre (CAOC 3) in Reitan was deactivated in 2008 and its responsibilities were transferred to the Combined Air Operations Centre Finderup (CAOC F). After CAOC F was deactivated in 2013 the responsibility for the air defense of Norway was transferred to CAOC Uedem in Germany and the Royal Norwegian Air Force's Control and Reporting Centre in Sørreisa reports to it. Until 2016 the Royal Norwegian Air Force's radar installations were distributed between two CRCs. That year the CRC Mågerø was disbanded. In its place a wartime mobilization back-up CRC has been formed, with personnel reduced from around 170 active-duty staff to about 50 air force home guardsmen. The SINDRE I radars are a variant of the HR-3000 radar, which is also used in the German HADR radars. The newer RAT-31SL/N radars are sometimes designated SINDRE II. The Polish Armed Forces Operational Command's Air Operations Centre is located in the Warsaw-Pyry neighborhood and reports to CAOC Uedem. The 3rd Wrocław Radiotechnical Brigade is responsible for the operation of the Armed Forces radar equipment. As of 2021, the Polish Air Force possesses three NUR-12M and three RAT-31DL long-range radars making up the BACKBONE system, which are listed below. The Portuguese Air Force's Combined Air Operations Centre (CAOC 10) in Lisbon was deactivated in 2013 and its responsibilities were transferred to CAOC Torrejón in Spain. The Romanian Air Force's Air Operations Centre is headquartered in Bucharest and reports to CAOC Torrejón. 
Additionally, the WSR-98D radar stations in Bârnova, Medgidia, Bobohalma, Timișoara, and Oradea are officially designated and operated as civilian radar stations by the National Meteorological Administration; however, their data is fed into the military air surveillance system as well. The Slovak Air Force's Air Operations Centre is located at Zvolen and reports to CAOC Uedem. The Slovak Air Force still operates obsolete Soviet-era radars that are being replaced by Israeli-made medium- and short-range EL/M-2084 radars. The Slovak Air Force also operates five mobile TRMS 3-D LÜR 3D long-range surveillance radars. The Slovenian Air Force and Air Defense's Airspace Surveillance and Control Centre is headquartered in Brnik and reports to CAOC Torrejón. The Italian Air Force's 4th Wing at Grosseto Air Base and 36th Wing at Gioia del Colle Air Base rotate a QRA flight of Eurofighter Typhoons to Istrana Air Base, which are responsible for the air defense of Northern Italy and Slovenia. The Spanish Air Force's Combined Air Operations Centre (CAOC 8) at Torrejón Air Base was deactivated in 2013 and replaced at the same location by CAOC Torrejón, which took over the functions of CAOC 5, CAOC 7, CAOC 8 and CAOC 10. CAOC Torrejón is responsible for the NATO airspace South of the Alps. The Turkish Air Force's Combined Air Operations Centre (CAOC 6) in Eskişehir was deactivated in 2013 and its responsibilities were transferred to CAOC Torrejón in Spain. Turkey's Air Force fields a mix of HR-3000, AN/FPS-117, RAT-31SL and RAT-31DL radars; however, the exact number of each of these radars and their locations in the Turkish radar system are unknown. The Royal Air Force's Air Surveillance and Control System is located at RAF Boulmer, and reports to CAOC Uedem. The RAF operates seven Remote Radar Heads (RRHs) across the UK, which feed back to the Control and Reporting Centre at RAF Boulmer. Under Project Guardian, all of the UK's radar stations and systems are being upgraded and strengthened. 
The UK is also unique in Europe in possessing a Ballistic Missile Early Warning System (BMEWS), which is based at RAF Fylingdales. The United States Air Force's control centres and radar stations are part of the Canadian/American North American Aerospace Defense Command.
[ { "paragraph_id": 0, "text": "The NATO Integrated Air Defense System (short: NATINADS) is a command and control network combining radars and other facilities spread throughout the NATO alliance's air defence forces. It formed in the mid-1950s and became operational in 1962 as NADGE. It has been constantly upgraded since its formation, notably with the integration of Airborne Early Warning aircraft in the 1970s. The United Kingdom maintained its own network, but has been fully integrated with the network since the introduction of the Linesman/Mediator network in the 1970s. Similarly, the German network maintained an independent nature through GEADGE.", "title": "" }, { "paragraph_id": 1, "text": "Development was approved by the NATO Military Committee in December 1955. The system was to be based on four air defense regions (ADRs) coordinated by SACEUR (Supreme Allied Commander Europe). Starting from 1956 early warning coverage was extended across Western Europe using 18 radar stations. This part of the system was completed by 1962. Linked to existing national radar sites the coordinated system was called the NATO Air Defence Ground Environment (NADGE).", "title": "Development" }, { "paragraph_id": 2, "text": "From 1960 NATO countries agreed to place all their air defence forces under the command of SACEUR in the event of war. These forces included command & control (C2) systems, radar installations, and Surface-to-Air (SAM) missile units as well as interceptor aircraft.", "title": "Development" }, { "paragraph_id": 3, "text": "By 1972 NADGE was converted into NATINADS consisting of 84 radar sites and associated Control Reporting Centers (CRC) and in the 1980s the Airborne Early Warning / Ground Environment Integration Segment (AEGIS) upgraded the NATINADS with the possibility to integrate the AWACS radar picture and all of its information into its visual displays. 
(NOTE: This AEGIS is not to be confused with the U.S. Navy AEGIS, a shipboard fire control radar and weapons system.) AEGIS processed the information through Hughes H5118ME computers, which replaced the H3118M computers installed at NADGE sites in the late 1960s and early 1970s.", "title": "Development" }, { "paragraph_id": 4, "text": "NATINADS' ability to handle data increased with faster clock rates. The H5118M computer had 1 megabyte of memory and could handle 1.2 million instructions per second, while the former model had a memory of only 256 kilobytes and a speed of 150,000 instructions per second.", "title": "Development" }, { "paragraph_id": 5, "text": "NATINADS/AEGIS was complemented in West Germany by the German Air Defence Ground Environment (GEADGE), an updated radar network adding the southern part of Germany to the European system, and by the Coastal Radar Integration System (CRIS), which added data links from Danish coastal radars.", "title": "Development" }, { "paragraph_id": 6, "text": "In order to counter hardware obsolescence, during the mid-1990s NATO started the AEGIS Site Emulator (ASE) program, allowing the NATINADS/AEGIS sites to replace the proprietary hardware (the 5118ME computer and the various operator consoles IDM-2, HMD-22, IDM-80) with commercial-off-the-shelf (COTS) servers and workstations.", "title": "Development" }, { "paragraph_id": 7, "text": "In the early 2000s the initial ASE capability was expanded: the greater power of the new hardware made it possible to run multiple site emulators on the same machine, so the system was renamed the Multi-AEGIS Site Emulator (MASE). 
The NATO system designed to replace MASE in the near future is the Air Command and Control System (ACCS).", "title": "Development" }, { "paragraph_id": 8, "text": "Because of changing politics, NATO expansion and financial crises, most European NATO countries are trying to cut defence budgets; as a direct result, many obsolete and outdated NATINADS facilities have been phased out early. As of 2013, operational NATO radar sites in Europe are as follows:", "title": "Development" }, { "paragraph_id": 9, "text": "Allied Air Command (AIRCOM) is the central command of all NATO air forces on the European continent. The command is based at Ramstein Air Base in Germany and has two subordinate commands in Germany and Spain. The Royal Canadian Air Force and United States Air Force fall under command of the Canadian/American North American Aerospace Defense Command.", "title": "Allied Air Command" }, { "paragraph_id": 10, "text": "The Albanian Air Force operates Lockheed Martin AN/TPS-77 radars.", "title": "Radar stations" }, { "paragraph_id": 11, "text": "The Belgian Air Component's Control and Reporting Centre was based at Glons, where its main radar was also located. The radar was deactivated in 2015 and the Centre moved to Beauvechain Air Base in 2020. The Belgian Control and Reporting Centre reports to CAOC Uedem in Germany and is also responsible for guarding the airspace of Luxembourg. At the new location the Control and Reporting Centre uses digital radar data of the civilian radars of Belgocontrol and the Marconi S-723 radar of the Air Component's Air Traffic Control Centre in Semmerzake.", "title": "Radar stations" }, { "paragraph_id": 12, "text": "The Bulgarian Air Force's Air Sovereignty Operations Centre is located in Sofia and reports to CAOC Torrejón. The Bulgarian Air Force fields three control and surveillance zones, which operate obsolete Soviet-era radars. 
The Bulgarian Air Force intends to replace these radars with fewer, but more capable Western 3-D radars as soon as possible. The future locations of the new radars are as of 2018 unknown.", "title": "Radar stations" }, { "paragraph_id": 13, "text": "The Royal Canadian Air Force's control centres and radar stations are part of the Canadian/American North American Aerospace Defense Command.", "title": "Radar stations" }, { "paragraph_id": 14, "text": "The Croatian Air Force and Air Defense's Airspace Surveillance Centre is headquartered in Podvornica and reports to CAOC Torrejón.", "title": "Radar stations" }, { "paragraph_id": 15, "text": "The Czech Air Force's Control and Reporting Centre is located in Hlavenec and reports to CAOC Uedem.", "title": "Radar stations" }, { "paragraph_id": 16, "text": "The Royal Danish Air Force's Combined Air Operations Centre (CAOC 1) in Finderup was deactivated in 2008 and replaced at the same location by the Combined Air Operations Centre Finderup (CAOC F), which had responsibility for the airspaces of Iceland, Norway, Denmark and the United Kingdom. CAOC F was deactivated in 2013 and its responsibilities were transferred to CAOC Uedem in Germany. 
The national Danish Control and Reporting Centre is located at Karup Air Base and it reports to CAOC Uedem.", "title": "Radar stations" }, { "paragraph_id": 17, "text": "The Pituffik Space Base in Greenland is a United States Space Force installation and its radars are part of the North American Aerospace Defense Command and United States Space Command.", "title": "Radar stations" }, { "paragraph_id": 18, "text": "The Estonian Air Force's Air Operations Control Centre is located at Ämari Air Base and reports to the Baltic Air Surveillance Network's Regional Airspace Surveillance Coordination Centre (RASCC) in Karmėlava, Lithuania, which in turn reports to CAOC Uedem.", "title": "Radar stations" }, { "paragraph_id": 19, "text": "The French Air and Space Force's Air Operations Centre is located at Mont Verdun Air Base and reports to CAOC Uedem. Most French radar sites use the PALMIER radar, which is being taken out of service. By 2022 all PALMIER radars will have been replaced with new radar stations using the GM 403 radar.", "title": "Radar stations" }, { "paragraph_id": 20, "text": "Additionally the French Air and Space Force fields a GM 406 radar at the Cayenne-Rochambeau Air Base in French Guiana to protect the Guiana Space Centre in Kourou.", "title": "Radar stations" }, { "paragraph_id": 21, "text": "The German Air Force's Combined Air Operations Centre (CAOC 2) in Uedem was deactivated in 2008 and reactivated as CAOC Uedem in 2013. CAOC Uedem is responsible for the NATO airspace North of the Alps. 
The HADR radars are a variant of the HR-3000 radar, while the RRP-117 radars are a variant of the AN/FPS-117.", "title": "Radar stations" }, { "paragraph_id": 22, "text": "1st Area Control Centre, inside Mount Chortiatis, with Marconi S-743D 2nd Area Control Centre, inside Mount Parnitha, with Marconi S-743D 9th Control and Warning Station Squadron, on Mount Pelion, with Marconi S-743D 10th Control and Warning Station Squadron, on Mount Chortiatis, with Marconi S-743D", "title": "Radar stations" }, { "paragraph_id": 23, "text": "The Hellenic Air Force's Combined Air Operations Centre (CAOC 7) at Larissa Air Base was deactivated in 2013 and its responsibilities transferred to the CAOC Torrejón in Spain. The Hellenic Air Force fields two HR-3000, four AR-327 and six Marconi S-743D radar systems; however, as of 2018 the air force is in the process of replacing some of its older systems with three RAT-31DL radars.", "title": "Radar stations" }, { "paragraph_id": 24, "text": "The Hungarian Air Force's Air Operations Centre is located in Veszprém and reports to CAOC Uedem. There are three additional radar companies with Soviet-era equipment subordinate to the 54th Radar Regiment \"Veszprém\"; however, it is unclear if they will remain in service once Hungary's newest radar at Medina reaches full operational capability.", "title": "Radar stations" }, { "paragraph_id": 25, "text": "The Iceland Air Defense System, which is part of the Icelandic Coast Guard, monitors Iceland's airspace. Air Defense is provided by fighter jets from NATO allies, which rotate units for the Icelandic Air Policing mission to Keflavik Air Base. 
The Iceland Air Defense System's Control and Reporting Centre is at Keflavik Air Base and reports to CAOC Uedem in Germany.", "title": "Radar stations" }, { "paragraph_id": 26, "text": "The Italian Air Force's Combined Air Operations Centre (CAOC 5) in Poggio Renatico was deactivated in 2013 and replaced with the Mobile Command and Control Regiment (RMCC) at Bari Air Base, while the Centre's responsibilities were transferred to the CAOC Torrejón in Spain.", "title": "Radar stations" }, { "paragraph_id": 27, "text": "The Latvian Air Force's Air Operations Centre is located at Lielvārde Air Base and reports to the Baltic Air Surveillance Network's Regional Airspace Surveillance Coordination Centre (RASCC) in Karmėlava, Lithuania, which in turn reports to CAOC Uedem.", "title": "Radar stations" }, { "paragraph_id": 28, "text": "The Lithuanian Air Force's Air Operations Control Centre is located in Karmėlava and reports to the Baltic Air Surveillance Network's Regional Airspace Surveillance Coordination Centre (RASCC) co-located in Karmėlava, which in turn reports to CAOC Uedem.", "title": "Radar stations" }, { "paragraph_id": 29, "text": "Luxembourg's airspace is monitored and guarded by the Belgian Air Component's Control and Reporting Centre at Beauvechain Air Base.", "title": "Radar stations" }, { "paragraph_id": 30, "text": "The Armed Forces of Montenegro do not possess a modern air defense radar and the country's airspace is monitored by Italian Air Force radar sites. The Armed Forces Air Surveillance and Reporting Centre is located at Podgorica Airport in Golubovci and reports to CAOC Torrejón in Spain.", "title": "Radar stations" }, { "paragraph_id": 31, "text": "The Royal Netherlands Air Force's Air Operations Centre is located at Nieuw-Milligen and reports to CAOC Uedem. 
The air force's main radars are being replaced with two modern SMART-L GB radars.", "title": "Radar stations" }, { "paragraph_id": 32, "text": "The Royal Norwegian Air Force's Combined Air Operations Centre (CAOC 3) in Reitan was deactivated in 2008 and its responsibilities were transferred to the Combined Air Operations Centre Finderup (CAOC F). After CAOC F was deactivated in 2013 the responsibility for the air defense of Norway was transferred to CAOC Uedem in Germany and the Royal Norwegian Air Force's Control and Reporting Centre in Sørreisa reports to it. Until 2016 the Royal Norwegian Air Force's radar installations were distributed between two CRCs. That year the CRC Mågerø was disbanded. In its place a wartime mobilization back-up CRC has been formed, with personnel reduced from around 170 active-duty staff to about 50 air force home guardsmen. The SINDRE I radars are a variant of the HR-3000 radar, which is also used in the German HADR radars. The newer RAT-31SL/N radars are sometimes designated SINDRE II.", "title": "Radar stations" }, { "paragraph_id": 33, "text": "The Polish Armed Forces Operational Command's Air Operations Centre is located in the Warsaw-Pyry neighborhood and reports to CAOC Uedem. The 3rd Wrocław Radiotechnical Brigade is responsible for the operation of the Armed Forces radar equipment. As of 2021, the Polish Air Force possesses three NUR-12M and three RAT-31DL long-range radars making up the BACKBONE system, which are listed below.", "title": "Radar stations" }, { "paragraph_id": 34, "text": "The Portuguese Air Force's Combined Air Operations Centre (CAOC 10) in Lisbon was deactivated in 2013 and its responsibilities were transferred to CAOC Torrejón in Spain.", "title": "Radar stations" }, { "paragraph_id": 35, "text": "The Romanian Air Force's Air Operations Centre is headquartered in Bucharest and reports to CAOC Torrejón. 
Additionally, the WSR-98D radar stations in Bârnova, Medgidia, Bobohalma, Timișoara, and Oradea are officially designated and operated as civilian radar stations by the National Meteorological Administration; however, their data is fed into the military air surveillance system as well.", "title": "Radar stations" }, { "paragraph_id": 36, "text": "The Slovak Air Force's Air Operations Centre is located at Zvolen and reports to CAOC Uedem. The Slovak Air Force still operates obsolete Soviet-era radars that are being replaced by Israeli-made medium- and short-range EL/M-2084 radars. The Slovak Air Force also operates five mobile TRMS 3-D LÜR 3D long-range surveillance radars.", "title": "Radar stations" }, { "paragraph_id": 37, "text": "The Slovenian Air Force and Air Defense's Airspace Surveillance and Control Centre is headquartered in Brnik and reports to CAOC Torrejón.", "title": "Radar stations" }, { "paragraph_id": 38, "text": "The Italian Air Force's 4th Wing at Grosseto Air Base and 36th Wing at Gioia del Colle Air Base rotate a QRA flight of Eurofighter Typhoons to Istrana Air Base, which are responsible for the air defense of Northern Italy and Slovenia.", "title": "Radar stations" }, { "paragraph_id": 39, "text": "The Spanish Air Force's Combined Air Operations Centre (CAOC 8) at Torrejón Air Base was deactivated in 2013 and replaced at the same location by CAOC Torrejón, which took over the functions of CAOC 5, CAOC 7, CAOC 8 and CAOC 10. CAOC Torrejón is responsible for the NATO airspace South of the Alps.", "title": "Radar stations" }, { "paragraph_id": 40, "text": "The Turkish Air Force's Combined Air Operations Centre (CAOC 6) in Eskişehir was deactivated in 2013 and its responsibilities were transferred to CAOC Torrejón in Spain. 
Turkey's Air Force fields a mix of HR-3000, AN/FPS-117, RAT-31SL and RAT-31DL radars; however, the exact number of each of these radars and their locations in the Turkish radar system are unknown.", "title": "Radar stations" }, { "paragraph_id": 41, "text": "The Royal Air Force's Air Surveillance and Control System is located at RAF Boulmer, and reports to CAOC Uedem. The RAF operates seven Remote Radar Heads (RRHs) across the UK, which feed back to the Control and Reporting Centre at RAF Boulmer. Under Project Guardian, all of the UK's radar stations and systems are being upgraded and strengthened. The UK is also unique in Europe in possessing a Ballistic Missile Early Warning System (BMEWS), which is based at RAF Fylingdales.", "title": "Radar stations" }, { "paragraph_id": 42, "text": "The United States Air Force's control centres and radar stations are part of the Canadian/American North American Aerospace Defense Command.", "title": "Radar stations" } ]
The NATO Integrated Air Defense System is a command and control network combining radars and other facilities spread throughout the NATO alliance's air defence forces. It formed in the mid-1950s and became operational in 1962 as NADGE. It has been constantly upgraded since its formation, notably with the integration of Airborne Early Warning aircraft in the 1970s. The United Kingdom maintained its own network, but has been fully integrated with the network since the introduction of the Linesman/Mediator network in the 1970s. Similarly, the German network maintained an independent nature through GEADGE.
2002-02-25T15:51:15Z
2023-11-30T11:35:44Z
[ "Template:Ill", "Template:Reflist", "Template:Cite web", "Template:Cite news", "Template:Citation", "Template:Short description", "Template:Location map " ]
https://en.wikipedia.org/wiki/NATO_Integrated_Air_Defense_System
15,382
Invisible balance
The invisible balance or balance of trade on services is that part of the balance of trade that refers to services and other products that do not result in the transfer of physical objects. Examples include consulting services, shipping services, tourism, and patent license revenues. This figure is usually generated by tertiary industry. The term 'invisible balance' is especially common in the United Kingdom. For countries that rely on service exports or on tourism, the invisible balance is particularly important. For instance, the United Kingdom and Saudi Arabia receive significant international income from financial services, while Japan and Germany rely more on exports of manufactured goods. Invisibles are both international payments for services (as opposed to goods), as well as movements of money without exchange for goods or services. These invisibles are called 'transfer payments' or 'remittances' and may include money sent from one country to another by an individual, business, government or non-governmental organisation (NGO) – often charities. An individual remittance may include money sent to a relative overseas. Business transfers may include profits sent by a foreign subsidiary to a parent company or money invested by a business in a foreign country. Bank loans to foreign countries are also included in this category, as are license fees paid for the use of patents and trademarks. Government transfers may involve loans made or official aid given to foreign countries, while transfers made by NGOs include money designated for charitable work within foreign countries. In many countries a useful distinction is drawn between the balance of trade and the balance of payments. 
'Balance of trade' refers to the trade of both tangible (physical) objects as well as the trade in services – collectively known as exports and imports (in other words, 'visibles plus services') – while the 'balance of payments' also includes transfers of capital in the form of loans, investments in shares or direct investment in projects. A nation may have a visibles balance surplus but this can be offset by a larger deficit in the invisibles balance (creating a balance of trade deficit overall) – if, for example, there are large payments made to foreign businesses for invisibles such as shipping or tourism. On the other hand, a visibles balance deficit can be offset by a strong surplus on the invisibles balance if, for example, foreign aid is being provided. In a similar way, a nation may also have a surplus 'balance of trade' because it exports more than it imports but a negative (or deficit) 'balance of payments' because it has a much greater shortfall in transfers of capital. And, just as easily, a deficit in the 'balance of trade' may be offset by a larger surplus in capital transfers from overseas to produce a balance of payments surplus overall. Problems with a country's balance of trade (or balance of payments) are often associated with an inappropriate valuation of its currency – its foreign exchange rate. If a country's exchange rate is too high, its exports will become uncompetitive as buyers in foreign countries require more of their own currency to pay for them. In the meantime, it also becomes cheaper for the citizens of the country to buy goods from overseas (as opposed to buying locally produced goods), because an overvalued currency makes foreign products less expensive. The simultaneous decline in currency inflows from decreased exports and the rise in outflows, due to increased imports, sends the balance of trade into deficit, which then needs to be paid for by a transfer of funds in some form, either invisible transfers (aid, etc.) 
or capital flows (loans, etc.). However, relying on such funds to support a trade deficit is unsustainable, and the country may eventually require its currency to be devalued. If, on the other hand, a currency is undervalued, its exports will become cheaper and therefore more competitive internationally. At the same time, imports will also become more costly, stimulating the production of domestic substitutes to replace them. That will result in a growth of currency flowing into the country and a decline in currency flowing out of it, resulting in an improvement in the country's balance of trade. Because a nation's exchange rate has a big impact on its 'balance of trade' and its 'balance of payments', many economists favour freely floating exchange rates over the older, fixed (or pegged) rates of foreign currency exchange. Floating exchange rates allow more regular adjustments in exchange rates to occur, allowing greater opportunity for international payments to maintain equilibrium.
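The offsetting relationships described above reduce to simple sums: the balance of trade is the visible balance plus the invisible balance, and the balance of payments additionally counts net capital transfers. A minimal sketch of that arithmetic (the function names and all figures are illustrative assumptions, not data from this article):

```python
# Illustrative sketch of how visible, invisible and capital balances offset
# one another. All figures are hypothetical examples, not real statistics.

def balance_of_trade(visibles: float, invisibles: float) -> float:
    """Balance of trade = visible balance (goods) + invisible balance (services, transfers)."""
    return visibles + invisibles

def balance_of_payments(trade_balance: float, net_capital_transfers: float) -> float:
    """Balance of payments also counts capital flows (loans, investment, aid)."""
    return trade_balance + net_capital_transfers

# A visibles surplus offset by a larger invisibles deficit gives a
# balance-of-trade deficit overall...
trade = balance_of_trade(visibles=40.0, invisibles=-55.0)
print(trade)  # -15.0

# ...which a larger surplus in capital transfers can still turn into a
# balance-of-payments surplus overall.
payments = balance_of_payments(trade, net_capital_transfers=20.0)
print(payments)  # 5.0
```

The same two sums also cover the opposite case in the text: a trade surplus paired with a larger capital-transfer shortfall yields a balance-of-payments deficit.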
[ { "paragraph_id": 0, "text": "The invisible balance or balance of trade on services is that part of the balance of trade that refers to services and other products that do not result in the transfer of physical objects. Examples include consulting services, shipping services, tourism, and patent license revenues. This figure is usually generated by tertiary industry. The term 'invisible balance' is especially common in the United Kingdom.", "title": "" }, { "paragraph_id": 1, "text": "For countries that rely on service exports or on tourism, the invisible balance is particularly important. For instance, the United Kingdom and Saudi Arabia receive significant international income from financial services, while Japan and Germany rely more on exports of manufactured goods.", "title": "" }, { "paragraph_id": 2, "text": "Invisibles are both international payments for services (as opposed to goods), as well as movements of money without exchange for goods or services. These invisibles are called 'transfer payments' or 'remittances' and may include money sent from one country to another by an individual, business, government or non-governmental organisation (NGO) – often charities.", "title": "Types of invisibles" }, { "paragraph_id": 3, "text": "An individual remittance may include money sent to a relative overseas. Business transfers may include profits sent by a foreign subsidiary to a parent company or money invested by a business in a foreign country. Bank loans to foreign countries are also included in this category, as are license fees paid for the use of patents and trademarks. Government transfers may involve loans made or official aid given to foreign countries, while transfers made by NGOs include money designated for charitable work within foreign countries.", "title": "Types of invisibles" }, { "paragraph_id": 4, "text": "In many countries a useful distinction is drawn between the balance of trade and the balance of payments. 
'Balance of trade' refers to the trade of both tangible (physical) objects as well as the trade in services – collectively known as exports and imports (in other words, 'visibles plus services') – while the 'balance of payments' also includes transfers of capital in the form of loans, investments in shares or direct investment in projects.", "title": "Balance of payments and invisibles" }, { "paragraph_id": 5, "text": "A nation may have a visibles balance surplus but this can be offset by a larger deficit in the invisibles balance (creating a balance of trade deficit overall) – if, for example, there are large payments made to foreign businesses for invisibles such as shipping or tourism. On the other hand, a visibles balance deficit can be offset by a strong surplus on the invisibles balance if, for example, foreign aid is being provided.", "title": "Balance of payments and invisibles" }, { "paragraph_id": 6, "text": "In a similar way, a nation may also have a surplus 'balance of trade' because it exports more than it imports but a negative (or deficit) 'balance of payments' because it has a much greater shortfall in transfers of capital. And, just as easily, a deficit in the 'balance of trade' may be offset by a larger surplus in capital transfers from overseas to produce a balance of payments surplus overall.", "title": "Balance of payments and invisibles" }, { "paragraph_id": 7, "text": "Problems with a country's balance of trade (or balance of payments) are often associated with an inappropriate valuation of its currency – its foreign exchange rate.", "title": "Balance of payments problems and the invisible balance" }, { "paragraph_id": 8, "text": "If a country's exchange rate is too high, its exports will become uncompetitive as buyers in foreign countries require more of their own currency to pay for them. 
In the meantime, it also becomes cheaper for the citizens of the country to buy goods from overseas (as opposed to buying locally produced goods), because an overvalued currency makes foreign products less expensive.", "title": "Balance of payments problems and the invisible balance" }, { "paragraph_id": 9, "text": "The simultaneous decline in currency inflows from decreased exports and the rise in outflows, due to increased imports, sends the balance of trade into deficit, which then needs to be paid for by a transfer of funds in some form, either invisible transfers (aid, etc.) or capital flows (loans, etc.). However, relying on such funds to support a trade deficit is unsustainable, and the country may eventually require its currency to be devalued.", "title": "Balance of payments problems and the invisible balance" }, { "paragraph_id": 10, "text": "If, on the other hand, a currency is undervalued, its exports will become cheaper and therefore more competitive internationally. At the same time, imports will also become more costly, stimulating the production of domestic substitutes to replace them. That will result in a growth of currency flowing into the country and a decline in currency flowing out of it, resulting in an improvement in the country's balance of trade.", "title": "Balance of payments problems and the invisible balance" }, { "paragraph_id": 11, "text": "Because a nation's exchange rate has a big impact on its 'balance of trade' and its 'balance of payments', many economists favour freely floating exchange rates over the older, fixed (or pegged) rates of foreign currency exchange. Floating exchange rates allow more regular adjustments in exchange rates to occur, allowing greater opportunity for international payments to maintain equilibrium.", "title": "Balance of payments problems and the invisible balance" } ]
The invisible balance or balance of trade on services is that part of the balance of trade that refers to services and other products that do not result in the transfer of physical objects. Examples include consulting services, shipping services, tourism, and patent license revenues. This figure is usually generated by tertiary industry. The term 'invisible balance' is especially common in the United Kingdom. For countries that rely on service exports or on tourism, the invisible balance is particularly important. For instance, the United Kingdom and Saudi Arabia receive significant international income from financial services, while Japan and Germany rely more on exports of manufactured goods.
2022-02-05T04:45:34Z
[ "Template:Unreferenced" ]
https://en.wikipedia.org/wiki/Invisible_balance
15,387
Irreducible complexity
Irreducible complexity (IC) is the argument that certain biological systems with multiple interacting parts would not function if one of the parts were removed, so supposedly could not have evolved by successive small modifications from earlier less complex systems through natural selection, which would need all intermediate precursor systems to have been fully functional. This negative argument is then complemented by the claim that the only alternative explanation is a "purposeful arrangement of parts" inferring design by an intelligent agent. Irreducible complexity has become central to the creationist concept of intelligent design (ID), but the concept of irreducible complexity has been rejected by the scientific community, which regards intelligent design as pseudoscience.

Irreducible complexity and specified complexity are the two main arguments used by intelligent-design proponents to support their version of the theological argument from design. The central concept, of biological complexity too improbable to have evolved by chance natural processes, was already featured in creation science. The 1989 school textbook Of Pandas and People introduced the alternative terminology of intelligent design; the 1993 edition was revised to include a variation of the same argument: it was later shown that these revisions were written by Michael Behe, a professor of biochemistry at Lehigh University. Behe introduced the expression irreducible complexity along with a full account of his arguments in his 1996 book Darwin's Black Box, and he said it made evolution through natural selection of random mutations impossible, or extremely improbable. This was based on the mistaken assumption that evolution relies on improvement of existing functions, ignoring how complex adaptations originate from changes in function, and disregarding published research.
Evolutionary biologists have published rebuttals showing how systems discussed by Behe can evolve, and examples documented through comparative genomics show that complex molecular systems are formed by the addition of components as revealed by different temporal origins of their proteins. In the 2005 Kitzmiller v. Dover Area School District trial, Behe gave testimony on the subject of irreducible complexity. The court found that "Professor Behe's claim for irreducible complexity has been refuted in peer-reviewed research papers and has been rejected by the scientific community at large."

Michael Behe defined irreducible complexity in natural selection in terms of well-matched parts in his 1996 book Darwin's Black Box:

... a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning.

A second definition given by Behe in 2000 (his "evolutionary definition") states:

An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway.

Intelligent-design advocate William A. Dembski assumed an "original function" in his 2002 definition:

A system performing a given basic function is irreducibly complex if it includes a set of well-matched, mutually interacting, nonarbitrarily individuated parts such that each part in the set is indispensable to maintaining the system's basic, and therefore original, function. The set of these indispensable parts is known as the irreducible core of the system.

The argument from irreducible complexity is a descendant of the teleological argument for God (the argument from design or from complexity). This states that complex functionality in the natural world which looks designed is evidence of an intelligent creator.
William Paley famously argued, in his 1802 watchmaker analogy, that complexity in nature implies a God for the same reason that the existence of a watch implies the existence of a watchmaker. This argument has a long history, and one can trace it back at least as far as Cicero's De Natura Deorum ii.34, written in 45 BC. Galen (1st and 2nd centuries AD) wrote about the large number of parts of the body and their relationships, which observation was cited as evidence for creation. The idea that the interdependence between parts would have implications for the origins of living things was raised by writers starting with Pierre Gassendi in the mid-17th century and by John Wilkins (1614–1672), who wrote (citing Galen), "Now to imagine, that all these things, according to their several kinds, could be brought into this regular frame and order, to which such an infinite number of Intentions are required, without the contrivance of some wise Agent, must needs be irrational in the highest degree." In the late 17th century, Thomas Burnet referred to "a multitude of pieces aptly joyn'd" to argue against the eternity of life. In the early 18th century, Nicolas Malebranche wrote "An organized body contains an infinity of parts that mutually depend upon one another in relation to particular ends, all of which must be actually formed in order to work as a whole", arguing in favor of preformation, rather than epigenesis, of the individual; and a similar argument about the origins of the individual was made by other 18th-century students of natural history. In his 1790 book, The Critique of Judgment, Kant is said by Guyer to argue that "we cannot conceive how a whole that comes into being only gradually from its parts can nevertheless be the cause of the properties of those parts". Chapter XV of Paley's Natural Theology discusses at length what he called "relations" of parts of living things as an indication of their design.
Georges Cuvier applied his principle of the correlation of parts to describe an animal from fragmentary remains. For Cuvier, this related to another principle of his, the conditions of existence, which excluded the possibility of transmutation of species. While he did not originate the term, Charles Darwin identified the argument as a possible way to falsify a prediction of the theory of evolution at the outset. In The Origin of Species (1859), he wrote, "If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case." Darwin's theory of evolution challenges the teleological argument by postulating an alternative explanation to that of an intelligent designer—namely, evolution by natural selection. By showing how simple unintelligent forces can ratchet up designs of extraordinary complexity without invoking outside design, Darwin showed that an intelligent designer was not the necessary conclusion to draw from complexity in nature. The argument from irreducible complexity attempts to demonstrate that certain biological features cannot be purely the product of Darwinian evolution. In the late 19th century, in a dispute between supporters of the adequacy of natural selection and those who held for inheritance of acquired characteristics, one of the arguments made repeatedly by Herbert Spencer, and followed by others, depended on what Spencer referred to as co-adaptation of co-operative parts, as in: "We come now to Professor Weismann's endeavour to disprove my second thesis — that it is impossible to explain by natural selection alone the co-adaptation of co-operative parts. It is thirty years since this was set forth in 'The Principles of Biology.' 
In § 166, I instanced the enormous horns of the extinct Irish elk, and contended that in this and in kindred cases, where for the efficient use of some one enlarged part many other parts have to be simultaneously enlarged, it is out of the question to suppose that they can have all spontaneously varied in the required proportions." Darwin responded to Spencer's objections in chapter XXV of The Variation of Animals and Plants Under Domestication (1868). The history of this concept in the dispute has been characterized: "An older and more religious tradition of idealist thinkers were committed to the explanation of complex adaptive contrivances by intelligent design. ... Another line of thinkers, unified by the recurrent publications of Herbert Spencer, also saw co-adaptation as a composed, irreducible whole, but sought to explain it by the inheritance of acquired characteristics." St. George Jackson Mivart raised the objection to natural selection that "Complex and simultaneous co-ordinations ... until so far developed as to effect the requisite junctions, are useless". In the 2012 book Evolution and Belief, Confessions of a Religious Paleontologist, Robert J. Asher said this "amounts to the concept of 'irreducible complexity' as defined by ... Michael Behe". Hermann Muller, in the early 20th century, discussed a concept similar to irreducible complexity. However, far from seeing this as a problem for evolution, he described the "interlocking" of biological features as a consequence to be expected of evolution, which would lead to irreversibility of some evolutionary changes. He wrote, "Being thus finally woven, as it were, into the most intimate fabric of the organism, the once novel character can no longer be withdrawn with impunity, and may have become vitally necessary." In 1975 Thomas H. Frazzetta published a book-length study of a concept similar to irreducible complexity, explained by gradual, step-wise, non-teleological evolution. 
Frazzetta wrote: "A complex adaptation is one constructed of several components that must blend together operationally to make the adaptation 'work'. It is analogous to a machine whose performance depends upon careful cooperation among its parts. In the case of the machine, no single part can greatly be altered without changing the performance of the entire machine." The machine that he chose as an analog is the Peaucellier–Lipkin linkage, and one biological system given extended description was the jaw apparatus of a python. The conclusion of this investigation, rather than that evolution of a complex adaptation was impossible, "awed by the adaptations of living things, to be stunned by their complexity and suitability", was "to accept the inescapable but not humiliating fact that much of mankind can be seen in a tree or a lizard." In 1985 Cairns-Smith wrote of "interlocking": "How can a complex collaboration between components evolve in small steps?" and used the analogy of the scaffolding called centering – used to build an arch then removed afterwards: "Surely there was 'scaffolding'. Before the multitudinous components of present biochemistry could come to lean together they had to lean on something else." However, neither Muller nor Cairns-Smith claimed their ideas as evidence of something supernatural. An early concept of irreducibly complex systems comes from Ludwig von Bertalanffy (1901–1972), an Austrian biologist. He believed that complex systems must be examined as complete, irreducible systems in order to fully understand how they work. He extended his work on biological complexity into a general theory of systems in a book titled General Systems Theory. After James Watson and Francis Crick published the structure of DNA in the early 1950s, General Systems Theory lost many of its adherents in the physical and biological sciences.
However, systems theory remained popular in the social sciences long after its demise in the physical and biological sciences. Versions of the irreducible complexity argument have been common in young Earth creationist (YEC) creation science journals. For example, in the July 1965 issue of Creation Research Society Quarterly Harold W. Clark argued that the complex interaction of yucca moths with the plants they fertilize would not function if it was incomplete, so could not have evolved; "The whole procedure points so strongly to intelligent design that it is difficult to escape the conclusion that the hand of a wise and beneficent creator has been involved." In 1974 the YEC Henry M. Morris introduced an irreducible complexity concept in his creation science book Scientific Creationism, in which he wrote; "The creationist maintains that the degree of complexity and order which science has discovered in the universe could never be generated by chance or accident." He continued; "This issue can actually be attacked quantitatively, using simple principles of mathematical probability. The problem is simply whether a complex system, in which many components function unitedly together, and in which each component is uniquely necessary to the efficient functioning of the whole, could ever arise by random processes." In 1975 Duane Gish wrote in The Amazing Story of Creation from Science and the Bible; "The creationist maintains that the degree of complexity and order which science has discovered in the universe could never be generated by chance or accident." A 1980 article in the creation science magazine Creation by the YEC Ariel A. Roth said "Creation and various other views can be supported by the scientific data that reveal that the spontaneous origin of the complex integrated biochemical systems of even the simplest organisms is, at best, a most improbable event". In 1981, defending the creation science position in the trial McLean v. 
Arkansas, Roth said of "complex integrated structures": "This system would not be functional until all the parts were there ... How did these parts survive during evolution ...?" In 1985, countering the creationist claim that all the changes would be needed at once, Cairns-Smith again invoked his "scaffolding" analogy of the centering used to build an arch and then removed. The bacterial flagellum featured in creation science literature. Morris later claimed that one of their Institute for Creation Research "scientists (the late Dr. Dick Bliss) was using this example in his talks on creation a generation ago". In December 1992 the creation science magazine Creation called bacterial flagella "rotary engines", and dismissed the possibility that these "incredibly complicated arrangements of matter" could have "evolved by selection of chance mutations. The alternative explanation, that they were created, is much more reasonable." An article in the Creation Research Society Magazine for June 1994 called a flagellum a "bacterial nanomachine", forming the "bacterial rotor-flagellar complex" where "it is clear from the details of their operation that nothing about them works unless every one of their complexly fashioned and integrated components are in place", hard to explain by natural selection. The abstract said that in "terms of biophysical complexity, the bacterial rotor-flagellum is without precedent in the living world. ... To evolutionists, the system presents an enigma; to creationists, it offers clear and compelling evidence of purposeful intelligent design."
The biology supplementary textbook for schools Of Pandas and People was drafted presenting creation science arguments, but shortly after the Edwards v. Aguillard ruling, that it was unconstitutional to teach creationism in public school science classes, the authors changed the wording to "intelligent design", introducing the new meaning of this term when the book was published in 1989. In a separate response to the same ruling, law professor Phillip E. Johnson wrote Darwin on Trial, published in 1991, and at a conference in March 1992 brought together key figures in what he later called the 'wedge movement', including biochemistry professor Michael Behe. According to Johnson, around 1992 Behe developed his ideas of what he later called his "irreducible complexity" concept, and first presented these ideas in June 1993 when the "Johnson-Behe cadre of scholars" met at Pajaro Dunes in California. The second edition of Of Pandas and People, published in 1993, had extensive revisions to Chapter 6 Biochemical Similarities with new sections on the complex mechanism of blood clotting and on the origin of proteins. Behe was not named as their author, but in Doubts About Darwin: A History of Intelligent Design, published in 2003, historian Thomas Woodward wrote that "Michael Behe assisted in the rewriting of a chapter on biochemistry in a revised edition of Pandas. The book stands as one of the milestones in the infancy of Design." On Access Research Network [3 February 1999] Behe posted "Molecular Machines: Experimental Support for the Design Inference" with a note that "This paper was originally presented in the Summer of 1994 at the meeting of the C. S. Lewis Society, Cambridge University." 
An "Irreducible Complexity" section quoted Darwin, then discussed "the humble mousetrap", and "Molecular Machines", going into detail about cilia before saying "Other examples of irreducible complexity abound, including aspects of protein transport, blood clotting, closed circular DNA, electron transport, the bacterial flagellum, telomeres, photosynthesis, transcription regulation, and much more. Examples of irreducible complexity can be found on virtually every page of a biochemistry textbook." Suggesting "these things cannot be explained by Darwinian evolution," he said they had been neglected by the scientific community. Behe first published the term "irreducible complexity" in his 1996 book Darwin's Black Box, where he set out his ideas about theoretical properties of some complex biochemical cellular systems, now including the bacterial flagellum. He posits that evolutionary mechanisms cannot explain the development of such "irreducibly complex" systems. Notably, Behe credits the philosopher William Paley, alone among his predecessors, for the original concept. Intelligent design advocates argue that irreducibly complex systems must have been deliberately engineered by some form of intelligence. In 2001, Michael Behe wrote: "[T]here is an asymmetry between my current definition of irreducible complexity and the task facing natural selection. I hope to repair this defect in future work." Behe specifically explained that the "current definition puts the focus on removing a part from an already functioning system", but the "difficult task facing Darwinian evolution, however, would not be to remove parts from sophisticated pre-existing systems; it would be to bring together components to make a new system in the first place". In the 2005 Kitzmiller v. Dover Area School District trial, Behe testified under oath that he "did not judge [the asymmetry] serious enough to [have revised the book] yet."
Behe additionally testified that the presence of irreducible complexity in organisms would not rule out the involvement of evolutionary mechanisms in the development of organic life. He further testified that he knew of no earlier "peer reviewed articles in scientific journals discussing the intelligent design of the blood clotting cascade," but that there were "probably a large number of peer reviewed articles in science journals that demonstrate that the blood clotting system is indeed a purposeful arrangement of parts of great complexity and sophistication." (The judge ruled that "intelligent design is not science and is essentially religious in nature".) According to the theory of evolution, genetic variations occur without specific design or intent. The environment "selects" the variants that have the highest fitness, which are then passed on to the next generation of organisms. Change occurs by the gradual operation of natural forces over time, perhaps slowly, perhaps more quickly (see punctuated equilibrium). This process is able to adapt complex structures from simpler beginnings, or convert complex structures from one function to another (see spandrel). Most intelligent design advocates accept that evolution occurs through mutation and natural selection at the "micro level", such as changing the relative frequency of various beak lengths in finches, but assert that it cannot account for irreducible complexity, because none of the parts of an irreducible system would be functional or advantageous until the entire system is in place. Behe uses the mousetrap as an illustrative example of this concept. A mousetrap consists of five interacting pieces: the base, the catch, the spring, the hammer, and the hold-down bar. All of these must be in place for the mousetrap to work, as the removal of any one piece destroys the function of the mousetrap. Likewise, he asserts that biological systems require multiple parts working together in order to function. 
Intelligent design advocates claim that natural selection could not create from scratch those systems for which science is currently unable to find a viable evolutionary pathway of successive, slight modifications, because the selectable function is only present when all parts are assembled. In his 2008 book Only A Theory, biologist Kenneth R. Miller challenges Behe's claim that the mousetrap is irreducibly complex. Miller observes that various subsets of the five components can be devised to form cooperative units, ones that have different functions from the mousetrap and so, in biological terms, could form functional spandrels before being adapted to the new function of catching mice. In an example taken from his high school experience, Miller recalls that one of his classmates ...struck upon the brilliant idea of using an old, broken mousetrap as a spitball catapult, and it worked brilliantly.... It had worked perfectly as something other than a mousetrap.... my rowdy friend had pulled a couple of parts --probably the hold-down bar and catch-- off the trap to make it easier to conceal and more effective as a catapult... [leaving] the base, the spring, and the hammer. Not much of a mousetrap, but a helluva spitball launcher.... I realized why [Behe's] mousetrap analogy had bothered me. It was wrong. The mousetrap is not irreducibly complex after all. Miller identified other everyday systems that also incorporate mousetrap components. The point of the reduction is that—in biology—most or all of the components were already at hand, by the time it became necessary to build a mousetrap. As such, it required far fewer steps to develop a mousetrap than to design all the components from scratch. Thus, the development of the mousetrap, said to consist of five different parts which had no function on their own, has been reduced to one step: the assembly from parts that are already present, performing other functions.
Supporters of intelligent design argue that anything less than the complete form of such a system or organ would not work at all, or would in fact be a detriment to the organism, and would therefore never survive the process of natural selection. Although they accept that some complex systems and organs can be explained by evolution, they claim that organs and biological features which are irreducibly complex cannot be explained by current models, and that an intelligent designer must have created life or guided its evolution. Accordingly, the debate on irreducible complexity concerns two questions: whether irreducible complexity can be found in nature, and what significance it would have if it did exist in nature. Behe's original examples of irreducibly complex mechanisms included the bacterial flagellum of E. coli, the blood clotting cascade, cilia, and the adaptive immune system. Behe argues that organs and biological features which are irreducibly complex cannot be wholly explained by current models of evolution. In explicating his definition of "irreducible complexity" he notes that: An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional. Irreducible complexity is not an argument that evolution does not occur, but rather an argument that it is "incomplete". In the last chapter of Darwin's Black Box, Behe goes on to explain his view that irreducible complexity is evidence for intelligent design. Mainstream critics, however, argue that irreducible complexity, as defined by Behe, can be generated by known evolutionary mechanisms. Behe's claim that no scientific literature adequately modeled the origins of biochemical systems through evolutionary mechanisms has been challenged by TalkOrigins. 
The judge in the Dover trial wrote "By defining irreducible complexity in the way that he has, Professor Behe attempts to exclude the phenomenon of exaptation by definitional fiat, ignoring as he does so abundant evidence which refutes his argument. Notably, the NAS has rejected Professor Behe's claim for irreducible complexity..." Behe and others have suggested a number of biological features that they believed to be irreducibly complex. The process of blood clotting or coagulation cascade in vertebrates is a complex biological pathway which is given as an example of apparent irreducible complexity. The irreducible complexity argument assumes that the necessary parts of a system have always been necessary, and therefore could not have been added sequentially. However, in evolution, something which is at first merely advantageous can later become necessary. Natural selection can lead to complex biochemical systems being built up from simpler systems, or to existing functional systems being recombined as a new system with a different function. For example, one of the clotting factors that Behe listed as a part of the clotting cascade (Factor XII, also called Hageman factor) was later found to be absent in whales, demonstrating that it is not essential for a clotting system. Many purportedly irreducible structures can be found in other organisms as much simpler systems that utilize fewer parts. These systems, in turn, may have had even simpler precursors that are now extinct. Behe has responded to critics of his clotting cascade arguments by suggesting that homology is evidence for evolution, but not for natural selection. The "improbability argument" also misrepresents natural selection. It is correct to say that a set of simultaneous mutations that form a complex protein structure is so unlikely as to be unfeasible, but that is not what Darwin advocated. His explanation is based on small accumulated changes that take place without a final goal. 
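The contrast drawn above, between a set of simultaneous mutations and Darwin's small accumulated changes, can be made concrete with a toy simulation in the spirit of Richard Dawkins's "weasel" program. This is an illustrative sketch only; the model, its numbers, and the function names are assumptions for the demonstration, not taken from the article or from any cited work:

```python
import random

# Toy model (illustrative assumption): a "system" of 20 parts, each of
# which must end up in one of 4 possible states to match a working
# configuration. Waiting for every part to be correct simultaneously has
# probability (1/4)**20, roughly 9e-13, per trial. Cumulative selection,
# which keeps any single mutation that does not reduce the number of
# correct parts, typically succeeds in a few hundred generations.

random.seed(0)

PARTS, STATES = 20, 4
TARGET = [0] * PARTS  # the "working" configuration

def fitness(genome):
    # Number of parts currently in their working state.
    return sum(g == t for g, t in zip(genome, TARGET))

def cumulative_selection():
    genome = [random.randrange(STATES) for _ in range(PARTS)]
    generations = 0
    while fitness(genome) < PARTS:
        mutant = genome[:]
        mutant[random.randrange(PARTS)] = random.randrange(STATES)
        if fitness(mutant) >= fitness(genome):  # selection keeps improvements
            genome = mutant
        generations += 1
    return generations

print("generations under cumulative selection:", cumulative_selection())
print("chance of all parts correct at once: %.1e" % (1 / STATES**PARTS))
```

The point of the sketch is the asymmetry in scale: the all-at-once probability is so small that it would essentially never occur, while stepwise selection of individually retained changes reaches the same configuration quickly, without any step "aiming" at the final goal.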
Each step must be advantageous in its own right, although biologists may not yet understand the reason behind all of them—for example, jawless fish accomplish blood clotting with just six proteins instead of the full ten. The eye is frequently cited by intelligent design and creationism advocates as a purported example of irreducible complexity. Behe used the "development of the eye problem" as evidence for intelligent design in Darwin's Black Box. Although Behe acknowledged that the evolution of the larger anatomical features of the eye have been well-explained, he pointed out that the complexity of the minute biochemical reactions required at a molecular level for light sensitivity still defies explanation. Creationist Jonathan Sarfati has described the eye as evolutionary biologists' "greatest challenge as an example of superb 'irreducible complexity' in God's creation", specifically pointing to the supposed "vast complexity" required for transparency. In an often misquoted passage from On the Origin of Species, Charles Darwin appears to acknowledge the eye's development as a difficulty for his theory. However, the quote in context shows that Darwin actually had a very good understanding of the evolution of the eye (see fallacy of quoting out of context). He notes that "to suppose that the eye ... could have been formed by natural selection, seems, I freely confess, absurd in the highest possible degree". Yet this observation was merely a rhetorical device for Darwin. He goes on to explain that if gradual evolution of the eye could be shown to be possible, "the difficulty of believing that a perfect and complex eye could be formed by natural selection ... can hardly be considered real". He then proceeded to roughly map out a likely course for evolution using examples of gradually more complex eyes of various species. Since Darwin's day, the eye's ancestry has become much better understood. 
Although learning about the construction of ancient eyes through fossil evidence is problematic due to the soft tissues leaving no imprint or remains, genetic and comparative anatomical evidence has increasingly supported the idea of a common ancestry for all eyes. Current evidence does suggest possible evolutionary lineages for the origins of the anatomical features of the eye. One likely chain of development is that the eyes originated as simple patches of photoreceptor cells that could detect the presence or absence of light, but not its direction. When, via random mutation across the population, the photosensitive cells happened to have developed on a small depression, it endowed the organism with a better sense of the light's source. This small change gave the organism an advantage over those without the mutation. This genetic trait would then be "selected for" as those with the trait would have an increased chance of survival, and therefore progeny, over those without the trait. Individuals with deeper depressions would be able to discern changes in light over a wider field than those individuals with shallower depressions. As ever deeper depressions were advantageous to the organism, gradually, this depression would become a pit into which light would strike certain cells depending on its angle. The organism slowly gained increasingly precise visual information. And again, this gradual process continued as individuals having a slightly shrunken aperture of the eye had an advantage over those without the mutation as an aperture increases how collimated the light is at any one specific group of photoreceptors. As this trait developed, the eye became effectively a pinhole camera which allowed the organism to dimly make out shapes—the nautilus is a modern example of an animal with such an eye. 
Finally, via this same selection process, a protective layer of transparent cells over the aperture was differentiated into a crude lens, and the interior of the eye was filled with humours to assist in focusing images. In this way, eyes are recognized by modern biologists as actually a relatively unambiguous and simple structure to evolve, and many of the major developments of the eye's evolution are believed to have taken place over only a few million years, during the Cambrian explosion. Behe asserts that this is only an explanation of the gross anatomical steps, however, and not an explanation of the changes in discrete biochemical systems that would have needed to take place. Behe maintains that the complexity of light sensitivity at the molecular level and the minute biochemical reactions required for those first "simple patches of photoreceptor[s]" still defies explanation, and that the proposed series of infinitesimal steps to get from patches of photoreceptors to a fully functional eye would actually be considered great, complex leaps in evolution if viewed on the molecular scale. Other intelligent design proponents claim that the evolution of the entire visual system would be difficult rather than the eye alone. The flagella of certain bacteria constitute a molecular motor requiring the interaction of about 40 different protein parts. The flagellum (or cilium) developed from the pre-existing components of the eukaryotic cytoskeleton. In bacterial flagella, strong evidence points to an evolutionary pathway from a Type III secretory system, a simpler bacterial secretion system. 
Despite this, Behe presents this as a prime example of an irreducibly complex structure, defined as "a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning", and argues that since "an irreducibly complex system that is missing a part is by definition nonfunctional", it could not have evolved gradually through natural selection. However, each of the three types of flagella—eukaryotic, bacterial, and archaeal—has been shown to have evolutionary pathways. For archaeal flagella, there is a molecular homology with bacterial Type IV pili, pointing to an evolutionary link. In all these cases, intermediary, simpler forms of the structures are possible and provide partial functionality.

Reducible complexity. In contrast to Behe's claims, many proteins can be deleted or mutated and the flagellum still works, even though sometimes at reduced efficiency. In fact, the composition of flagella is surprisingly diverse across bacteria, with many proteins found only in some species but not others. Hence the flagellar apparatus is clearly very flexible in evolutionary terms and perfectly able to lose or gain protein components. Further studies have shown that, contrary to claims of "irreducible complexity", flagella and the type-III secretion system share several components, which provides strong evidence of a shared evolutionary history (see below). In fact, this example shows how a complex system can evolve from simpler components. Multiple processes were involved in the evolution of the flagellum, including horizontal gene transfer.

Evolution from type three secretion systems. The basal body of the flagella has been found to be similar to the Type III secretion system (TTSS), a needle-like structure that pathogenic germs such as Salmonella and Yersinia pestis use to inject toxins into living eukaryotic cells.
The needle's base has ten elements in common with the flagellum, but it is missing forty of the proteins that make a flagellum work. The TTSS negates Behe's claim that taking away any one of the flagellum's parts would prevent the system from functioning. On this basis, Kenneth Miller notes that "The parts of this supposedly irreducibly complex system actually have functions of their own." Studies have also shown that similar parts of the flagellum in different bacterial species can have different functions despite showing evidence of common descent, and that certain parts of the flagellum can be removed without completely eliminating its functionality. Behe responded to Miller by asking "why doesn't he just take an appropriate bacterial species, knock out the genes for its flagellum, place the bacterium under selective pressure (for mobility, say), and experimentally produce a flagellum — or any equally complex system — in the laboratory?" However, a laboratory experiment has been performed in which "immotile strains of the bacterium Pseudomonas fluorescens that lack flagella [...] regained flagella within 96 hours via a two-step evolutionary pathway", concluding that "natural selection can rapidly rewire regulatory networks in very few, repeatable mutational steps". Dembski has argued that, phylogenetically, the TTSS is found in a narrow range of bacteria, which makes it seem to him to be a late innovation, whereas flagella are widespread throughout many bacterial groups, and he argues that it was an early innovation. Against Dembski's argument, different flagella use completely different mechanisms, and publications show a plausible path in which bacterial flagella could have evolved from a secretion system.

The construction of the cilium, in which axoneme microtubules are moved by the sliding of the dynein protein, was also cited by Behe as an example of irreducible complexity.
He further said that the advances in knowledge in the subsequent 10 years had shown that the complexity of intraflagellar transport for the two-hundred-component cilium, and of many other cellular structures, is substantially greater than was known earlier.

The bombardier beetle is able to defend itself by directing a spray of hot fluid at an attacker. The mechanism involves a system for mixing hydroquinones and hydrogen peroxide, which react violently to attain a temperature near boiling point, and in some species a nozzle which allows the spray to be directed accurately in any direction. Creationists have used the bombardier beetle as a challenge to evolution since the days of Duane Gish and Robert Kofahl in the 1960s and 1970s. The combination of features of the bombardier beetle's defense mechanism—strongly exothermic reactions, boiling-hot fluids, and explosive release—has been claimed to be an example of irreducible complexity. Biologists such as the taxonomist Mark Isaak note, however, that step-by-step evolution of the mechanism could readily have occurred. In particular, quinones are precursors to sclerotin, used to harden the skeleton of many insects, while peroxide is a common by-product of metabolism.

Like intelligent design, the concept it seeks to support, irreducible complexity has failed to gain any notable acceptance within the scientific community. Researchers have proposed potentially viable evolutionary pathways for allegedly irreducibly complex systems such as blood clotting, the immune system and the flagellum—the three examples Behe proposed. John H. McDonald even showed his example of a mousetrap to be reducible. If irreducible complexity were an insurmountable obstacle to evolution, it should not be possible to conceive of such pathways. Niall Shanks and Karl H.
Joplin, both of East Tennessee State University, have shown that systems satisfying Behe's characterization of irreducible biochemical complexity can arise naturally and spontaneously as the result of self-organizing chemical processes. They also assert that what evolved biochemical and molecular systems actually exhibit is "redundant complexity"—a kind of complexity that is the product of an evolved biochemical process. They claim that Behe overestimated the significance of irreducible complexity because of his simple, linear view of biochemical reactions, resulting in his taking snapshots of selective features of biological systems, structures, and processes, while ignoring the redundant complexity of the context in which those features are naturally embedded. They also criticized his over-reliance on overly simplistic metaphors, such as his mousetrap.

A computer model of the co-evolution of proteins binding to DNA in the peer-reviewed journal Nucleic Acids Research consisted of several parts (DNA binders and DNA binding sites) which contribute to the basic function; removal of either one leads immediately to the death of the organism. This model fits the definition of irreducible complexity exactly, yet it evolves. (The program can be run from the Ev program.)

One can compare a mousetrap with a cat in this context. Both normally function so as to control the mouse population. The cat has many parts that can be removed leaving it still functional; for example, its tail can be bobbed, or it can lose an ear in a fight. Comparing the cat and the mousetrap, then, one sees that the mousetrap (which is not alive) offers better evidence, in terms of irreducible complexity, for intelligent design than the cat. Even looking at the mousetrap analogy, several critics have described ways in which the parts of the mousetrap could have independent uses or could develop in stages, demonstrating that it is not irreducibly complex.
Moreover, even cases where removing a certain component in an organic system will cause the system to fail do not demonstrate that the system could not have been formed in a step-by-step, evolutionary process. By analogy, stone arches are irreducibly complex—if you remove any stone the arch will collapse—yet humans build them easily enough, one stone at a time, by building over centering that is removed afterward. Similarly, naturally occurring arches of stone form by the weathering away of bits of stone from a large concretion that has formed previously. Evolution can act to simplify as well as to complicate. This raises the possibility that seemingly irreducibly complex biological features may have been achieved with a period of increasing complexity, followed by a period of simplification.

A team led by Joseph Thornton, assistant professor of biology at the University of Oregon's Center for Ecology and Evolutionary Biology, using techniques for resurrecting ancient genes, reconstructed the evolution of an apparently irreducibly complex molecular system. The April 7, 2006 issue of Science published this research. Irreducible complexity may not actually exist in nature, and the examples given by Behe and others may not in fact represent irreducible complexity, but can be explained in terms of simpler precursors.

The theory of facilitated variation challenges irreducible complexity. Marc W. Kirschner, a professor and chair of the Department of Systems Biology at Harvard Medical School, and John C. Gerhart, a professor in Molecular and Cell Biology at the University of California, Berkeley, presented this theory in 2005. They describe how certain mutations and changes can cause apparent irreducible complexity. Thus, seemingly irreducibly complex structures are merely "very complex", or they are simply misunderstood or misrepresented. The precursors of complex systems, when they are not useful in themselves, may be useful to perform other, unrelated functions.
Evolutionary biologists argue that evolution often works in this kind of blind, haphazard manner in which the function of an early form is not necessarily the same as the function of the later form. The term used for this process is exaptation. The mammalian middle ear (derived from a jawbone) and the panda's thumb (derived from a wrist bone spur) provide classic examples. A 2006 article in Nature demonstrates intermediate states leading toward the development of the ear in a Devonian fish (about 360 million years ago). Furthermore, recent research shows that viruses play a heretofore unexpected role in evolution by mixing and matching genes from various hosts.

Arguments for irreducibility often assume that things started out the same way they ended up—as we see them now. However, that may not necessarily be the case. In the Dover trial, an expert witness for the plaintiffs, Ken Miller, demonstrated this possibility using Behe's mousetrap analogy. By removing several parts, Miller made the object unusable as a mousetrap, but he pointed out that it was now a perfectly functional, if unstylish, tie clip. Irreducible complexity can be seen as equivalent to an "uncrossable valley" in a fitness landscape. A number of mathematical models of evolution have explored the circumstances under which such valleys can, nevertheless, be crossed.

An example of a structure that is claimed in Dembski's book No Free Lunch to be irreducibly complex, but evidently has evolved, is the protein T-urf13, which is responsible for the cytoplasmic male sterility of waxy corn and is due to a completely new gene. It arose from the fusion of several non-protein-coding fragments of mitochondrial DNA and the occurrence of several mutations, all of which were necessary. Behe's book Darwin Devolves claims that things like this would take billions of years and could not arise from random tinkering, but the corn was bred during the 20th century.
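The "uncrossable valley" idea mentioned above can be made concrete with a toy simulation. The sketch below is a minimal, hypothetical Wright-Fisher model, not any of the published models: all parameter values (population size, selection coefficients s and t, mutation rate mu) are invented for illustration. The wild type has fitness 1, a single-mutant intermediate sits in a shallow valley at fitness 1 − s, and the double mutant on the far side has fitness 1 + t. Despite the deleterious intermediate, recurrent mutation and genetic drift let a finite population cross:

```python
import random

def cross_valley(pop_size=500, s=0.01, t=0.1, mu=0.005,
                 max_gens=20000, rng=None):
    """Wright-Fisher simulation of fitness-valley crossing.

    Genotype 0 = wild type (fitness 1), genotype 1 = deleterious
    intermediate (fitness 1 - s), genotype 2 = beneficial double
    mutant (fitness 1 + t).  Mutation moves 0 -> 1 -> 2 at rate mu
    per individual per generation.  Returns the generation at which
    genotype 2 first exceeds half the population, or None if it
    never does within max_gens.
    """
    rng = rng or random.Random()
    fitness = [1.0, 1.0 - s, 1.0 + t]
    counts = [pop_size, 0, 0]          # everyone starts as wild type
    for gen in range(max_gens):
        # selection: offspring sampled in proportion to count * fitness
        weights = [c * w for c, w in zip(counts, fitness)]
        pop = rng.choices([0, 1, 2], weights=weights, k=pop_size)
        # mutation: each genotype-0 individual may become 1, each 1 may become 2
        pop = [g + 1 if g < 2 and rng.random() < mu else g for g in pop]
        counts = [pop.count(g) for g in range(3)]
        if counts[2] > pop_size // 2:
            return gen
    return None
```

With these toy parameters the double mutant typically takes over within a few thousand generations; making the valley deeper (larger s) or mutation rarer (smaller mu) slows the crossing, which is the trade-off the mathematical models in the literature analyze.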
When presented with T-urf13 as an example of the evolvability of irreducibly complex systems, the Discovery Institute resorted to its flawed probability argument based on false premises, akin to the Texas sharpshooter fallacy.

Some critics, such as Jerry Coyne (professor of evolutionary biology at the University of Chicago) and Eugenie Scott (a physical anthropologist and former executive director of the National Center for Science Education), have argued that the concept of irreducible complexity and, more generally, intelligent design is not falsifiable and, therefore, not scientific. Behe argues that the theory that irreducibly complex systems could not have evolved can be falsified by an experiment where such systems are evolved. For example, he posits taking bacteria with no flagellum and imposing a selective pressure for mobility. If, after a few thousand generations, the bacteria evolved the bacterial flagellum, then Behe believes that this would refute his theory. This has been done: a laboratory experiment has been performed in which "immotile strains of the bacterium Pseudomonas fluorescens that lack flagella [...] regained flagella within 96 hours via a two-step evolutionary pathway", concluding that "natural selection can rapidly rewire regulatory networks in very few, repeatable mutational steps". Other critics take a different approach, pointing to experimental evidence that they consider falsification of the argument for intelligent design from irreducible complexity. For example, Kenneth Miller describes the lab work of Barry G. Hall on E. coli as showing that "Behe is wrong". Other evidence that irreducible complexity is not a problem for evolution comes from the field of computer science, which routinely uses computer analogues of the processes of evolution in order to automatically design complex solutions to problems.
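As an illustration of how such evolutionary computation works, here is a deliberately minimal sketch in the spirit of Dawkins's well-known "weasel" demonstration. It is not a reconstruction of the human-competitive systems mentioned above: the target string, population size, and mutation rate are arbitrary choices made for this example. Random mutation plus cumulative selection assembles a complete match with no designer specifying the intermediate steps:

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def evolve(target="METHINKS IT IS LIKE A WEASEL",
           pop_size=200, mu=0.02, rng=None):
    """Evolve a random string toward `target` by mutation and
    selection alone; returns the number of generations taken."""
    rng = rng or random.Random()

    def fitness(s):
        # number of positions already matching the target
        return sum(a == b for a, b in zip(s, target))

    def mutate(s):
        # each character independently mutates with probability mu
        return "".join(rng.choice(ALPHABET) if rng.random() < mu else c
                       for c in s)

    parent = "".join(rng.choice(ALPHABET) for _ in target)
    generations = 0
    while parent != target:
        offspring = [mutate(parent) for _ in range(pop_size)]
        # keep the best of parent and offspring (elitist selection)
        parent = max(offspring + [parent], key=fitness)
        generations += 1
    return generations
```

Note that selection here rewards partial matches at every step; the claim that partial configurations cannot be functional is exactly the premise that the biological counterexamples cited in this article dispute.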
The results of such genetic algorithms are frequently irreducibly complex, since the process, like evolution, both removes non-essential components over time and adds new components. The removal of unused components with no essential function, like the natural process where rock underneath a natural arch is removed, can produce irreducibly complex structures without requiring the intervention of a designer. Researchers applying these algorithms automatically produce human-competitive designs—but no human designer is required.

Intelligent design proponents attribute to an intelligent designer those biological structures they believe are irreducibly complex, and therefore they say a natural explanation is insufficient to account for them. However, critics view irreducible complexity as a special case of the "complexity indicates design" claim, and thus see it as an argument from ignorance and as a God-of-the-gaps argument. Eugenie Scott and Glenn Branch of the National Center for Science Education note that intelligent design arguments from irreducible complexity rest on the false assumption that a lack of knowledge of a natural explanation allows intelligent design proponents to assume an intelligent cause, when the proper response of scientists would be to say that we don't know, and further investigation is needed. Other critics describe Behe as saying that evolutionary explanations are not detailed enough to meet his standards, while at the same time presenting intelligent design as exempt from having to provide any positive evidence at all.

Irreducible complexity is at its core an argument against evolution. If truly irreducible systems are found, the argument goes, then intelligent design must be the correct explanation for their existence. However, this conclusion is based on the assumption that current evolutionary theory and intelligent design are the only two valid models to explain life, a false dilemma. At the 2005 Kitzmiller v.
Dover Area School District trial, expert witness testimony defending ID and IC was given by Behe and Scott Minnich, who had been one of the "Johnson-Behe cadre of scholars" at Pajaro Dunes in 1993, was prominent in ID, and was by then a tenured associate professor in microbiology at the University of Idaho. Behe conceded that there are no peer-reviewed papers supporting his claims that complex molecular systems, like the bacterial flagellum, the blood-clotting cascade, and the immune system, were intelligently designed, nor are there any peer-reviewed articles supporting his argument that certain complex molecular structures are "irreducibly complex." There was extensive discussion of IC arguments about the bacterial flagellum, first published in Behe's 1996 book, and when Minnich was asked if similar claims in a 1994 Creation Research Society article presented the same argument, Minnich said he didn't have any problem with that statement. In the final ruling of Kitzmiller v. Dover Area School District, Judge Jones specifically singled out irreducible complexity.
[ { "paragraph_id": 0, "text": "Irreducible complexity (IC) is the argument that certain biological systems with multiple interacting parts would not function if one of the parts were removed, so supposedly could not have evolved by successive small modifications from earlier less complex systems through natural selection, which would need all intermediate precursor systems to have been fully functional. This negative argument is then complemented by the claim that the only alternative explanation is a \"purposeful arrangement of parts\" inferring design by an intelligent agent. Irreducible complexity has become central to the creationist concept of intelligent design (ID), but the concept of irreducible complexity has been rejected by the scientific community, which regards intelligent design as pseudoscience. Irreducible complexity and specified complexity, are the two main arguments used by intelligent-design proponents to support their version of the theological argument from design.", "title": "" }, { "paragraph_id": 1, "text": "The central concept, of biological complexity too improbable to have evolved by chance natural processes, was already featured in creation science. The 1989 school textbook Of Pandas and People introduced the alternative terminology of intelligent design, the 1993 edition was revised to include a variation of the same argument: it was later shown that these revisions were written by Michael Behe, a professor of biochemistry at Lehigh University.", "title": "" }, { "paragraph_id": 2, "text": "Behe introduced the expression irreducible complexity along with a full account of his arguments in his 1996 book Darwin's Black Box, and he said it made evolution through natural selection of random mutations impossible, or extremely improbable. 
This was based on the mistaken assumption that evolution relies on improvement of existing functions, ignoring how complex adaptations originate from changes in function, and disregarding published research. Evolutionary biologists have published rebuttals showing how systems discussed by Behe can evolve, and examples documented through comparative genomics show that complex molecular systems are formed by the addition of components as revealed by different temporal origins of their proteins.", "title": "" }, { "paragraph_id": 3, "text": "In the 2005 Kitzmiller v. Dover Area School District trial, Behe gave testimony on the subject of irreducible complexity. The court found that \"Professor Behe's claim for irreducible complexity has been refuted in peer-reviewed research papers and has been rejected by the scientific community at large.\"", "title": "" }, { "paragraph_id": 4, "text": "Michael Behe defined irreducible complexity in natural selection in terms of well-matched parts in his 1996 book Darwin's Black Box:", "title": "Definitions" }, { "paragraph_id": 5, "text": "... a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning.", "title": "Definitions" }, { "paragraph_id": 6, "text": "A second definition given by Behe in 2000 (his \"evolutionary definition\") states:", "title": "Definitions" }, { "paragraph_id": 7, "text": "An irreducibly complex evolutionary pathway is one that contains one or more unselected steps (that is, one or more necessary-but-unselected mutations). The degree of irreducible complexity is the number of unselected steps in the pathway.", "title": "Definitions" }, { "paragraph_id": 8, "text": "Intelligent-design advocate William A. 
Dembski assumed an \"original function\" in his 2002 definition:", "title": "Definitions" }, { "paragraph_id": 9, "text": "A system performing a given basic function is irreducibly complex if it includes a set of well-matched, mutually interacting, nonarbitrarily individuated parts such that each part in the set is indispensable to maintaining the system's basic, and therefore original, function. The set of these indispensable parts is known as the irreducible core of the system.", "title": "Definitions" }, { "paragraph_id": 10, "text": "The argument from irreducible complexity is a descendant of the teleological argument for God (the argument from design or from complexity). This states that complex functionality in the natural world which looks designed is evidence of an intelligent creator. William Paley famously argued, in his 1802 watchmaker analogy, that complexity in nature implies a God for the same reason that the existence of a watch implies the existence of a watchmaker. This argument has a long history, and one can trace it back at least as far as Cicero's De Natura Deorum ii.34, written in 45 BC.", "title": "History" }, { "paragraph_id": 11, "text": "Galen (1st and 2nd centuries AD) wrote about the large number of parts of the body and their relationships, which observation was cited as evidence for creation. The idea that the interdependence between parts would have implications for the origins of living things was raised by writers starting with Pierre Gassendi in the mid-17th century and by John Wilkins (1614–1672), who wrote (citing Galen), \"Now to imagine, that all these things, according to their several kinds, could be brought into this regular frame and order, to which such an infinite number of Intentions are required, without the contrivance of some wise Agent, must needs be irrational in the highest degree.\" In the late 17th-century, Thomas Burnet referred to \"a multitude of pieces aptly joyn'd\" to argue against the eternity of life. 
In the early 18th century, Nicolas Malebranche wrote \"An organized body contains an infinity of parts that mutually depend upon one another in relation to particular ends, all of which must be actually formed in order to work as a whole\", arguing in favor of preformation, rather than epigenesis, of the individual; and a similar argument about the origins of the individual was made by other 18th-century students of natural history. In his 1790 book, The Critique of Judgment, Kant is said by Guyer to argue that \"we cannot conceive how a whole that comes into being only gradually from its parts can nevertheless be the cause of the properties of those parts\".", "title": "History" }, { "paragraph_id": 12, "text": "Chapter XV of Paley's Natural Theology discusses at length what he called \"relations\" of parts of living things as an indication of their design.", "title": "History" }, { "paragraph_id": 13, "text": "Georges Cuvier applied his principle of the correlation of parts to describe an animal from fragmentary remains. For Cuvier, this related to another principle of his, the conditions of existence, which excluded the possibility of transmutation of species.", "title": "History" }, { "paragraph_id": 14, "text": "While he did not originate the term, Charles Darwin identified the argument as a possible way to falsify a prediction of the theory of evolution at the outset. In The Origin of Species (1859), he wrote, \"If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down. But I can find out no such case.\" Darwin's theory of evolution challenges the teleological argument by postulating an alternative explanation to that of an intelligent designer—namely, evolution by natural selection. 
By showing how simple unintelligent forces can ratchet up designs of extraordinary complexity without invoking outside design, Darwin showed that an intelligent designer was not the necessary conclusion to draw from complexity in nature. The argument from irreducible complexity attempts to demonstrate that certain biological features cannot be purely the product of Darwinian evolution.", "title": "History" }, { "paragraph_id": 15, "text": "In the late 19th century, in a dispute between supporters of the adequacy of natural selection and those who held for inheritance of acquired characteristics, one of the arguments made repeatedly by Herbert Spencer, and followed by others, depended on what Spencer referred to as co-adaptation of co-operative parts, as in:", "title": "History" }, { "paragraph_id": 16, "text": "\"We come now to Professor Weismann's endeavour to disprove my second thesis — that it is impossible to explain by natural selection alone the co-adaptation of co-operative parts. It is thirty years since this was set forth in 'The Principles of Biology.' In § 166, I instanced the enormous horns of the extinct Irish elk, and contended that in this and in kindred cases, where for the efficient use of some one enlarged part many other parts have to be simultaneously enlarged, it is out of the question to suppose that they can have all spontaneously varied in the required proportions.\"", "title": "History" }, { "paragraph_id": 17, "text": "Darwin responded to Spencer's objections in chapter XXV of The Variation of Animals and Plants Under Domestication (1868). The history of this concept in the dispute has been characterized: \"An older and more religious tradition of idealist thinkers were committed to the explanation of complex adaptive contrivances by intelligent design. ... 
Another line of thinkers, unified by the recurrent publications of Herbert Spencer, also saw co-adaptation as a composed, irreducible whole, but sought to explain it by the inheritance of acquired characteristics.\"", "title": "History" }, { "paragraph_id": 18, "text": "St. George Jackson Mivart raised the objection to natural selection that \"Complex and simultaneous co-ordinations ... until so far developed as to effect the requisite junctions, are useless\". In the 2012 book Evolution and Belief, Confessions of a Religious Paleontologist, Robert J. Asher said this \"amounts to the concept of 'irreducible complexity' as defined by ... Michael Behe\".", "title": "History" }, { "paragraph_id": 19, "text": "Hermann Muller, in the early 20th century, discussed a concept similar to irreducible complexity. However, far from seeing this as a problem for evolution, he described the \"interlocking\" of biological features as a consequence to be expected of evolution, which would lead to irreversibility of some evolutionary changes. He wrote, \"Being thus finally woven, as it were, into the most intimate fabric of the organism, the once novel character can no longer be withdrawn with impunity, and may have become vitally necessary.\"", "title": "History" }, { "paragraph_id": 20, "text": "In 1975 Thomas H. Frazzetta published a book-length study of a concept similar to irreducible complexity, explained by gradual, step-wise, non-teleological evolution. Frazzetta wrote:", "title": "History" }, { "paragraph_id": 21, "text": "\"A complex adaptation is one constructed of several components that must blend together operationally to make the adaptation 'work'. It is analogous to a machine whose performance depends upon careful cooperation among its parts. 
In the case of the machine, no single part can greatly be altered without changing the performance of the entire machine.\"", "title": "History" }, { "paragraph_id": 22, "text": "The machine that he chose as an analog is the Peaucellier–Lipkin linkage, and one biological system given extended description was the jaw apparatus of a python. The conclusion of this investigation, rather than that evolution of a complex adaptation was impossible, \"awed by the adaptations of living things, to be stunned by their complexity and suitability\", was \"to accept the inescapable but not humiliating fact that much of mankind can be seen in a tree or a lizard.\"", "title": "History" }, { "paragraph_id": 23, "text": "In 1985 Cairns-Smith wrote of \"interlocking\": \"How can a complex collaboration between components evolve in small steps?\" and used the analogy of the scaffolding called centering – used to build an arch then removed afterwards: \"Surely there was 'scaffolding'. Before the multitudinous components of present biochemistry could come to lean together they had to lean on something else.\" However, neither Muller or Cairns-Smith claimed their ideas as evidence of something supernatural.", "title": "History" }, { "paragraph_id": 24, "text": "An early concept of irreducibly complex systems comes from Ludwig von Bertalanffy (1901–1972), an Austrian biologist. He believed that complex systems must be examined as complete, irreducible systems in order to fully understand how they work. He extended his work on biological complexity into a general theory of systems in a book titled General Systems Theory.", "title": "History" }, { "paragraph_id": 25, "text": "After James Watson and Francis Crick published the structure of DNA in the early 1950s, General Systems Theory lost many of its adherents in the physical and biological sciences. 
However, systems theory remained popular in the social sciences long after its demise in the physical and biological sciences.", "title": "History" }, { "paragraph_id": 26, "text": "Versions of the irreducible complexity argument have been common in young Earth creationist (YEC) creation science journals. For example, in the July 1965 issue of Creation Research Society Quarterly Harold W. Clark argued that the complex interaction of yucca moths with the plants they fertilize would not function if it was incomplete, so could not have evolved; \"The whole procedure points so strongly to intelligent design that it is difficult to escape the conclusion that the hand of a wise and beneficent creator has been involved.\"", "title": "History" }, { "paragraph_id": 27, "text": "In 1974 the YEC Henry M. Morris introduced an irreducible complexity concept in his creation science book Scientific Creationism, in which he wrote; \"The creationist maintains that the degree of complexity and order which science has discovered in the universe could never be generated by chance or accident.\" He continued; \"This issue can actually be attacked quantitatively, using simple principles of mathematical probability. The problem is simply whether a complex system, in which many components function unitedly together, and in which each component is uniquely necessary to the efficient functioning of the whole, could ever arise by random processes.\" In 1975 Duane Gish wrote in The Amazing Story of Creation from Science and the Bible; \"The creationist maintains that the degree of complexity and order which science has discovered in the universe could never be generated by chance or accident.\"", "title": "History" }, { "paragraph_id": 28, "text": "A 1980 article in the creation science magazine Creation by the YEC Ariel A. 
Roth said \"Creation and various other views can be supported by the scientific data that reveal that the spontaneous origin of the complex integrated biochemical systems of even the simplest organisms is, at best, a most improbable event\". In 1981, defending the creation science position in the trial McLean v. Arkansas, Roth said of \"complex integrated structures\": \"This system would not be functional until all the parts were there ... How did these parts survive during evolution ...?\"", "title": "History" }, { "paragraph_id": 29, "text": "In 1985, countering the creationist claims that all the changes would be needed at once, Cairns-Smith wrote of \"interlocking\": \"How can a complex collaboration between components evolve in small steps?\" and used the analogy of the scaffolding called centering – used to build an arch then removed afterwards: \"Surely there was 'scaffolding'. Before the multitudinous components of present biochemistry could come to lean together they had to lean on something else.\" Neither Muller or Cairns-Smith said their ideas were evidence of anything supernatural.", "title": "History" }, { "paragraph_id": 30, "text": "The bacterial flagellum featured in creation science literature. Morris later claimed that one of their Institute for Creation Research \"scientists (the late Dr. Dick Bliss) was using this example in his talks on creation a generation ago\". In December 1992 the creation science magazine Creation called bacterial flagella \"rotary engines\", and dismissed the possibility that these \"incredibly complicated arrangements of matter\" could have \"evolved by selection of chance mutations. 
The alternative explanation, that they were created, is much more reasonable.\" An article in the Creation Research Society Magazine for June 1994 called a flagellum a \"bacterial nanomachine\", forming the \"bacterial rotor-flagellar complex\" where \"it is clear from the details of their operation that nothing about them works unless every one of their complexly fashioned and integrated components are in place\", hard to explain by natural selection. The abstract said that in \"terms of biophysical complexity, the bacterial rotor-flagellum is without precedent in the living world. ... To evolutionists, the system presents an enigma; to creationists, it offers clear and compelling evidence of purposeful intelligent design.\"", "title": "History" }, { "paragraph_id": 31, "text": "The biology supplementary textbook for schools Of Pandas and People was drafted presenting creation science arguments, but shortly after the Edwards v. Aguillard ruling, that it was unconstitutional to teach creationism in public school science classes, the authors changed the wording to \"intelligent design\", introducing the new meaning of this term when the book was published in 1989. In a separate response to the same ruling, law professor Phillip E. Johnson wrote Darwin on Trial, published in 1991, and at a conference in March 1992 brought together key figures in what he later called the 'wedge movement', including biochemistry professor Michael Behe. According to Johnson, around 1992 Behe developed his ideas of what he later called his \"irreducible complexity\" concept, and first presented these ideas in June 1993 when the \"Johnson-Behe cadre of scholars\" met at Pajaro Dunes in California.", "title": "History" }, { "paragraph_id": 32, "text": "The second edition of Of Pandas and People, published in 1993, had extensive revisions to Chapter 6 Biochemical Similarities with new sections on the complex mechanism of blood clotting and on the origin of proteins.
Behe was not named as their author, but in Doubts About Darwin: A History of Intelligent Design, published in 2003, historian Thomas Woodward wrote that \"Michael Behe assisted in the rewriting of a chapter on biochemistry in a revised edition of Pandas. The book stands as one of the milestones in the infancy of Design.\"", "title": "History" }, { "paragraph_id": 33, "text": "On Access Research Network [3 February 1999] Behe posted \"Molecular Machines: Experimental Support for the Design Inference\" with a note that \"This paper was originally presented in the Summer of 1994 at the meeting of the C. S. Lewis Society, Cambridge University.\" An \"Irreducible Complexity\" section quoted Darwin, then discussed \"the humble mousetrap\", and \"Molecular Machines\", going into detail about cilia before saying \"Other examples of irreducible complexity abound, including aspects of protein transport, blood clotting, closed circular DNA, electron transport, the bacterial flagellum, telomeres, photosynthesis, transcription regulation, and much more. Examples of irreducible complexity can be found on virtually every page of a biochemistry textbook.\" Suggesting \"these things cannot be explained by Darwinian evolution,\" he said they had been neglected by the scientific community.", "title": "History" }, { "paragraph_id": 34, "text": "Behe first published the term \"irreducible complexity\" in his 1996 book Darwin's Black Box, where he set out his ideas about theoretical properties of some complex biochemical cellular systems, now including the bacterial flagellum. He posits that evolutionary mechanisms cannot explain the development of such \"irreducibly complex\" systems. 
Notably, Behe credits philosopher William Paley for the original concept (alone among the predecessors).", "title": "History" }, { "paragraph_id": 35, "text": "Intelligent design advocates argue that irreducibly complex systems must have been deliberately engineered by some form of intelligence.", "title": "History" }, { "paragraph_id": 36, "text": "In 2001, Michael Behe wrote: \"[T]here is an asymmetry between my current definition of irreducible complexity and the task facing natural selection. I hope to repair this defect in future work.\" Behe specifically explained that the \"current definition puts the focus on removing a part from an already functioning system\", but the \"difficult task facing Darwinian evolution, however, would not be to remove parts from sophisticated pre-existing systems; it would be to bring together components to make a new system in the first place\". In the 2005 Kitzmiller v. Dover Area School District trial, Behe testified under oath that he \"did not judge [the asymmetry] serious enough to [have revised the book] yet.\"", "title": "History" }, { "paragraph_id": 37, "text": "Behe additionally testified that the presence of irreducible complexity in organisms would not rule out the involvement of evolutionary mechanisms in the development of organic life. He further testified that he knew of no earlier \"peer reviewed articles in scientific journals discussing the intelligent design of the blood clotting cascade,\" but that there were \"probably a large number of peer reviewed articles in science journals that demonstrate that the blood clotting system is indeed a purposeful arrangement of parts of great complexity and sophistication.\" (The judge ruled that \"intelligent design is not science and is essentially religious in nature\".)", "title": "History" }, { "paragraph_id": 38, "text": "According to the theory of evolution, genetic variations occur without specific design or intent. 
The environment \"selects\" the variants that have the highest fitness, which are then passed on to the next generation of organisms. Change occurs by the gradual operation of natural forces over time, perhaps slowly, perhaps more quickly (see punctuated equilibrium). This process is able to adapt complex structures from simpler beginnings, or convert complex structures from one function to another (see spandrel). Most intelligent design advocates accept that evolution occurs through mutation and natural selection at the \"micro level\", such as changing the relative frequency of various beak lengths in finches, but assert that it cannot account for irreducible complexity, because none of the parts of an irreducible system would be functional or advantageous until the entire system is in place.", "title": "History" }, { "paragraph_id": 39, "text": "Behe uses the mousetrap as an illustrative example of this concept. A mousetrap consists of five interacting pieces: the base, the catch, the spring, the hammer, and the hold-down bar. All of these must be in place for the mousetrap to work, as the removal of any one piece destroys the function of the mousetrap. Likewise, he asserts that biological systems require multiple parts working together in order to function. Intelligent design advocates claim that natural selection could not create from scratch those systems for which science is currently unable to find a viable evolutionary pathway of successive, slight modifications, because the selectable function is only present when all parts are assembled.", "title": "History" }, { "paragraph_id": 40, "text": "In his 2008 book Only A Theory, biologist Kenneth R. Miller challenges Behe's claim that the mousetrap is irreducibly complex. 
Miller observes that various subsets of the five components can be devised to form cooperative units, ones that have different functions from the mousetrap and so, in biological terms, could form functional spandrels before being adapted to the new function of catching mice. In an example taken from his high school experience, Miller recalls that one of his classmates", "title": "History" }, { "paragraph_id": 41, "text": "...struck upon the brilliant idea of using an old, broken mousetrap as a spitball catapult, and it worked brilliantly.... It had worked perfectly as something other than a mousetrap.... my rowdy friend had pulled a couple of parts --probably the hold-down bar and catch-- off the trap to make it easier to conceal and more effective as a catapult... [leaving] the base, the spring, and the hammer. Not much of a mousetrap, but a helluva spitball launcher.... I realized why [Behe's] mousetrap analogy had bothered me. It was wrong. The mousetrap is not irreducibly complex after all.", "title": "History" }, { "paragraph_id": 42, "text": "Other systems identified by Miller that incorporate mousetrap components include the following:", "title": "History" }, { "paragraph_id": 43, "text": "The point of the reduction is that—in biology—most or all of the components were already at hand by the time it became necessary to build a mousetrap.
As such, it required far fewer steps to develop a mousetrap than to design all the components from scratch.", "title": "History" }, { "paragraph_id": 44, "text": "Thus, the development of the mousetrap, said to consist of five different parts which had no function on their own, has been reduced to one step: the assembly from parts that are already present, performing other functions.", "title": "History" }, { "paragraph_id": 45, "text": "Supporters of intelligent design argue that anything less than the complete form of such a system or organ would not work at all, or would in fact be a detriment to the organism, and would therefore never survive the process of natural selection. Although they accept that some complex systems and organs can be explained by evolution, they claim that organs and biological features which are irreducibly complex cannot be explained by current models, and that an intelligent designer must have created life or guided its evolution. Accordingly, the debate on irreducible complexity concerns two questions: whether irreducible complexity can be found in nature, and what significance it would have if it did exist in nature.", "title": "History" }, { "paragraph_id": 46, "text": "Behe's original examples of irreducibly complex mechanisms included the bacterial flagellum of E. coli, the blood clotting cascade, cilia, and the adaptive immune system.", "title": "History" }, { "paragraph_id": 47, "text": "Behe argues that organs and biological features which are irreducibly complex cannot be wholly explained by current models of evolution. 
In explicating his definition of \"irreducible complexity\" he notes that:", "title": "History" }, { "paragraph_id": 48, "text": "An irreducibly complex system cannot be produced directly (that is, by continuously improving the initial function, which continues to work by the same mechanism) by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional.", "title": "History" }, { "paragraph_id": 49, "text": "Irreducible complexity is not an argument that evolution does not occur, but rather an argument that it is \"incomplete\". In the last chapter of Darwin's Black Box, Behe goes on to explain his view that irreducible complexity is evidence for intelligent design. Mainstream critics, however, argue that irreducible complexity, as defined by Behe, can be generated by known evolutionary mechanisms. Behe's claim that no scientific literature adequately modeled the origins of biochemical systems through evolutionary mechanisms has been challenged by TalkOrigins. The judge in the Dover trial wrote \"By defining irreducible complexity in the way that he has, Professor Behe attempts to exclude the phenomenon of exaptation by definitional fiat, ignoring as he does so abundant evidence which refutes his argument. 
Notably, the NAS has rejected Professor Behe's claim for irreducible complexity...\"", "title": "History" }, { "paragraph_id": 50, "text": "Behe and others have suggested a number of biological features that they believed to be irreducibly complex.", "title": "Claimed examples" }, { "paragraph_id": 51, "text": "The process of blood clotting or coagulation cascade in vertebrates is a complex biological pathway which is given as an example of apparent irreducible complexity.", "title": "Claimed examples" }, { "paragraph_id": 52, "text": "The irreducible complexity argument assumes that the necessary parts of a system have always been necessary, and therefore could not have been added sequentially. However, in evolution, something which is at first merely advantageous can later become necessary. Natural selection can lead to complex biochemical systems being built up from simpler systems, or to existing functional systems being recombined as a new system with a different function. For example, one of the clotting factors that Behe listed as a part of the clotting cascade (Factor XII, also called Hageman factor) was later found to be absent in whales, demonstrating that it is not essential for a clotting system. Many purportedly irreducible structures can be found in other organisms as much simpler systems that utilize fewer parts. These systems, in turn, may have had even simpler precursors that are now extinct. Behe has responded to critics of his clotting cascade arguments by suggesting that homology is evidence for evolution, but not for natural selection.", "title": "Claimed examples" }, { "paragraph_id": 53, "text": "The \"improbability argument\" also misrepresents natural selection. It is correct to say that a set of simultaneous mutations that form a complex protein structure is so unlikely as to be unfeasible, but that is not what Darwin advocated. His explanation is based on small accumulated changes that take place without a final goal. 
Each step must be advantageous in its own right, although biologists may not yet understand the reason behind all of them—for example, jawless fish accomplish blood clotting with just six proteins instead of the full ten.", "title": "Claimed examples" }, { "paragraph_id": 54, "text": "The eye is frequently cited by intelligent design and creationism advocates as a purported example of irreducible complexity. Behe used the \"development of the eye problem\" as evidence for intelligent design in Darwin's Black Box. Although Behe acknowledged that the evolution of the larger anatomical features of the eye have been well-explained, he pointed out that the complexity of the minute biochemical reactions required at a molecular level for light sensitivity still defies explanation. Creationist Jonathan Sarfati has described the eye as evolutionary biologists' \"greatest challenge as an example of superb 'irreducible complexity' in God's creation\", specifically pointing to the supposed \"vast complexity\" required for transparency.", "title": "Claimed examples" }, { "paragraph_id": 55, "text": "In an often misquoted passage from On the Origin of Species, Charles Darwin appears to acknowledge the eye's development as a difficulty for his theory. However, the quote in context shows that Darwin actually had a very good understanding of the evolution of the eye (see fallacy of quoting out of context). He notes that \"to suppose that the eye ... could have been formed by natural selection, seems, I freely confess, absurd in the highest possible degree\". Yet this observation was merely a rhetorical device for Darwin. He goes on to explain that if gradual evolution of the eye could be shown to be possible, \"the difficulty of believing that a perfect and complex eye could be formed by natural selection ... can hardly be considered real\". 
He then proceeded to roughly map out a likely course for evolution using examples of gradually more complex eyes of various species.", "title": "Claimed examples" }, { "paragraph_id": 56, "text": "Since Darwin's day, the eye's ancestry has become much better understood. Although learning about the construction of ancient eyes through fossil evidence is problematic due to the soft tissues leaving no imprint or remains, genetic and comparative anatomical evidence has increasingly supported the idea of a common ancestry for all eyes.", "title": "Claimed examples" }, { "paragraph_id": 57, "text": "Current evidence does suggest possible evolutionary lineages for the origins of the anatomical features of the eye. One likely chain of development is that the eyes originated as simple patches of photoreceptor cells that could detect the presence or absence of light, but not its direction. When, via random mutation across the population, the photosensitive cells happened to have developed on a small depression, it endowed the organism with a better sense of the light's source. This small change gave the organism an advantage over those without the mutation. This genetic trait would then be \"selected for\" as those with the trait would have an increased chance of survival, and therefore progeny, over those without the trait. Individuals with deeper depressions would be able to discern changes in light over a wider field than those individuals with shallower depressions. As ever deeper depressions were advantageous to the organism, gradually, this depression would become a pit into which light would strike certain cells depending on its angle. The organism slowly gained increasingly precise visual information. And again, this gradual process continued as individuals having a slightly shrunken aperture of the eye had an advantage over those without the mutation as an aperture increases how collimated the light is at any one specific group of photoreceptors. 
As this trait developed, the eye became effectively a pinhole camera which allowed the organism to dimly make out shapes—the nautilus is a modern example of an animal with such an eye. Finally, via this same selection process, a protective layer of transparent cells over the aperture was differentiated into a crude lens, and the interior of the eye was filled with humours to assist in focusing images. In this way, eyes are recognized by modern biologists as actually a relatively unambiguous and simple structure to evolve, and many of the major developments of the eye's evolution are believed to have taken place over only a few million years, during the Cambrian explosion. Behe asserts that this is only an explanation of the gross anatomical steps, however, and not an explanation of the changes in discrete biochemical systems that would have needed to take place.", "title": "Claimed examples" }, { "paragraph_id": 58, "text": "Behe maintains that the complexity of light sensitivity at the molecular level and the minute biochemical reactions required for those first \"simple patches of photoreceptor[s]\" still defies explanation, and that the proposed series of infinitesimal steps to get from patches of photoreceptors to a fully functional eye would actually be considered great, complex leaps in evolution if viewed on the molecular scale. Other intelligent design proponents claim that the evolution of the entire visual system would be difficult rather than the eye alone.", "title": "Claimed examples" }, { "paragraph_id": 59, "text": "The flagella of certain bacteria constitute a molecular motor requiring the interaction of about 40 different protein parts. The flagellum (or cilium) developed from the pre-existing components of the eukaryotic cytoskeleton. In bacterial flagella, strong evidence points to an evolutionary pathway from a Type III secretory system, a simpler bacterial secretion system. 
Despite this, Behe presents this as a prime example of an irreducibly complex structure defined as \"a single system composed of several well-matched, interacting parts that contribute to the basic function, wherein the removal of any one of the parts causes the system to effectively cease functioning\", and argues that since \"an irreducibly complex system that is missing a part is by definition nonfunctional\", it could not have evolved gradually through natural selection. However, each of the three types of flagella—eukaryotic, bacterial, and archaeal—has been shown to have evolutionary pathways. For archaeal flagella, there is a molecular homology with bacterial Type IV pili, pointing to an evolutionary link. In all these cases, intermediary, simpler forms of the structures are possible and provide partial functionality.", "title": "Claimed examples" }, { "paragraph_id": 60, "text": "Reducible complexity. In contrast to Behe's claims, many proteins can be deleted or mutated and the flagellum still works, even though sometimes at reduced efficiency. In fact, the composition of flagella is surprisingly diverse across bacteria with many proteins only found in some species but not others. Hence the flagellar apparatus is clearly very flexible in evolutionary terms and perfectly able to lose or gain protein components. Further studies have shown that, contrary to claims of \"irreducible complexity\", flagella and the type-III secretion system share several components which provides strong evidence of a shared evolutionary history (see below). In fact, this example shows how a complex system can evolve from simpler components. Multiple processes were involved in the evolution of the flagellum, including horizontal gene transfer.", "title": "Claimed examples" }, { "paragraph_id": 61, "text": "Evolution from type three secretion systems. 
The basal body of the flagella has been found to be similar to the Type III secretion system (TTSS), a needle-like structure that pathogenic germs such as Salmonella and Yersinia pestis use to inject toxins into living eukaryote cells. The needle's base has ten elements in common with the flagellum, but it is missing forty of the proteins that make a flagellum work. The TTSS system negates Behe's claim that taking away any one of the flagellum's parts would prevent the system from functioning. On this basis, Kenneth Miller notes that, \"The parts of this supposedly irreducibly complex system actually have functions of their own.\" Studies have also shown that similar parts of the flagellum in different bacterial species can have different functions despite showing evidence of common descent, and that certain parts of the flagellum can be removed without completely eliminating its functionality. Behe responded to Miller by asking \"why doesn’t he just take an appropriate bacterial species, knock out the genes for its flagellum, place the bacterium under selective pressure (for mobility, say), and experimentally produce a flagellum — or any equally complex system — in the laboratory?\" However, a laboratory experiment has been performed where \"immotile strains of the bacterium Pseudomonas fluorescens that lack flagella [...] regained flagella within 96 hours via a two-step evolutionary pathway\", concluding that \"natural selection can rapidly rewire regulatory networks in very few, repeatable mutational steps\".", "title": "Claimed examples" }, { "paragraph_id": 62, "text": "Dembski has argued that phylogenetically, the TTSS is found in a narrow range of bacteria which makes it seem to him to be a late innovation, whereas flagella are widespread throughout many bacterial groups, and he argues that it was an early innovation.
Against Dembski's argument, different flagella use completely different mechanisms, and publications show a plausible path in which bacterial flagella could have evolved from a secretion system.", "title": "Claimed examples" }, { "paragraph_id": 63, "text": "The construction of the cilium, in which axoneme microtubules are moved by the sliding of the dynein protein, was cited by Behe as an example of irreducible complexity. He further said that the advances in knowledge in the subsequent 10 years had shown that the complexity of intraflagellar transport for the two-hundred-component cilium and many other cellular structures is substantially greater than was known earlier.", "title": "Claimed examples" }, { "paragraph_id": 64, "text": "The bombardier beetle is able to defend itself by directing a spray of hot fluid at an attacker. The mechanism involves a system for mixing hydroquinones and hydrogen peroxide, which react violently to attain a temperature near boiling point, and in some species a nozzle which allows the spray to be directed accurately in any direction. Creationists have long used the bombardier beetle as a challenge to evolution since the days of Duane Gish and Robert Kofahl in the 1960s and 1970s. The combination of features of the bombardier beetle's defense mechanism—strongly exothermic reactions, boiling-hot fluids, and explosive release—has been claimed to be an example of irreducible complexity.
In particular, quinones are precursors to sclerotin, used to harden the skeleton of many insects, while peroxide is a common by-product of metabolism.", "title": "Claimed examples" }, { "paragraph_id": 66, "text": "Like intelligent design, the concept it seeks to support, irreducible complexity has failed to gain any notable acceptance within the scientific community.", "title": "Response of the scientific community" }, { "paragraph_id": 67, "text": "Researchers have proposed potentially viable evolutionary pathways for allegedly irreducibly complex systems such as blood clotting, the immune system and the flagellum—the three examples Behe proposed. John H. McDonald even showed his example of a mousetrap to be reducible. If irreducible complexity is an insurmountable obstacle to evolution, it should not be possible to conceive of such pathways.", "title": "Response of the scientific community" }, { "paragraph_id": 68, "text": "Niall Shanks and Karl H. Joplin, both of East Tennessee State University, have shown that systems satisfying Behe's characterization of irreducible biochemical complexity can arise naturally and spontaneously as the result of self-organizing chemical processes. They also assert that what evolved biochemical and molecular systems actually exhibit is \"redundant complexity\"—a kind of complexity that is the product of an evolved biochemical process. They claim that Behe overestimated the significance of irreducible complexity because of his simple, linear view of biochemical reactions, resulting in his taking snapshots of selective features of biological systems, structures, and processes, while ignoring the redundant complexity of the context in which those features are naturally embedded. 
They also criticized his over-reliance on overly simplistic metaphors, such as his mousetrap.", "title": "Response of the scientific community" }, { "paragraph_id": 69, "text": "A computer model of the co-evolution of proteins binding to DNA in the peer-reviewed journal Nucleic Acids Research consisted of several parts (DNA binders and DNA binding sites) which contribute to the basic function; removal of either one leads immediately to the death of the organism. This model fits the definition of irreducible complexity exactly, yet it evolves. (The program can be run from Ev program.)", "title": "Response of the scientific community" }, { "paragraph_id": 70, "text": "One can compare a mousetrap with a cat in this context. Both normally function so as to control the mouse population. The cat has many parts that can be removed leaving it still functional; for example, its tail can be bobbed, or it can lose an ear in a fight. Comparing the cat and the mousetrap, then, one sees that the mousetrap (which is not alive) offers better evidence, in terms of irreducible complexity, for intelligent design than the cat. Even looking at the mousetrap analogy, several critics have described ways in which the parts of the mousetrap could have independent uses or could develop in stages, demonstrating that it is not irreducibly complex.", "title": "Response of the scientific community" }, { "paragraph_id": 71, "text": "Moreover, even cases where removing a certain component in an organic system will cause the system to fail do not demonstrate that the system could not have been formed in a step-by-step, evolutionary process. By analogy, stone arches are irreducibly complex—if you remove any stone the arch will collapse—yet humans build them easily enough, one stone at a time, by building over centering that is removed afterward. 
Similarly, naturally occurring arches of stone form by the weathering away of bits of stone from a large concretion that has formed previously.", "title": "Response of the scientific community" }, { "paragraph_id": 72, "text": "Evolution can act to simplify as well as to complicate. This raises the possibility that seemingly irreducibly complex biological features may have been achieved with a period of increasing complexity, followed by a period of simplification.", "title": "Response of the scientific community" }, { "paragraph_id": 73, "text": "A team led by Joseph Thornton, assistant professor of biology at the University of Oregon's Center for Ecology and Evolutionary Biology, using techniques for resurrecting ancient genes, reconstructed the evolution of an apparently irreducibly complex molecular system. The April 7, 2006 issue of Science published this research.", "title": "Response of the scientific community" }, { "paragraph_id": 74, "text": "Irreducible complexity may not actually exist in nature, and the examples given by Behe and others may not in fact represent irreducible complexity, but can be explained in terms of simpler precursors. The theory of facilitated variation challenges irreducible complexity. Marc W. Kirschner, a professor and chair of Department of Systems Biology at Harvard Medical School, and John C. Gerhart, a professor in Molecular and Cell Biology, University of California, Berkeley, presented this theory in 2005. They describe how certain mutation and changes can cause apparent irreducible complexity. Thus, seemingly irreducibly complex structures are merely \"very complex\", or they are simply misunderstood or misrepresented.", "title": "Response of the scientific community" }, { "paragraph_id": 75, "text": "The precursors of complex systems, when they are not useful in themselves, may be useful to perform other, unrelated functions. 
Evolutionary biologists argue that evolution often works in this kind of blind, haphazard manner in which the function of an early form is not necessarily the same as the function of the later form. The term used for this process is exaptation. The mammalian middle ear (derived from a jawbone) and the panda's thumb (derived from a wrist bone spur) provide classic examples. A 2006 article in Nature demonstrates intermediate states leading toward the development of the ear in a Devonian fish (about 360 million years ago). Furthermore, recent research shows that viruses play a heretofore unexpected role in evolution by mixing and matching genes from various hosts.", "title": "Response of the scientific community" }, { "paragraph_id": 76, "text": "Arguments for irreducibility often assume that things started out the same way they ended up—as we see them now. However, that may not necessarily be the case. In the Dover trial an expert witness for the plaintiffs, Ken Miller, demonstrated this possibility using Behe's mousetrap analogy. By removing several parts, Miller made the object unusable as a mousetrap, but he pointed out that it was now a perfectly functional, if unstylish, tie clip.", "title": "Response of the scientific community" }, { "paragraph_id": 77, "text": "Irreducible complexity can be seen as equivalent to an \"uncrossable valley\" in a fitness landscape. A number of mathematical models of evolution have explored the circumstances under which such valleys can, nevertheless, be crossed.", "title": "Response of the scientific community" }, { "paragraph_id": 78, "text": "An example of a structure that is claimed in Dembski's book No Free Lunch to be irreducibly complex, but evidently has evolved, is the protein T-urf13, which is responsible for the cytoplasmic male sterility of waxy corn and is due to a completely new gene. 
It arose from the fusion of several non-protein-coding fragments of mitochondrial DNA and the occurrence of several mutations, all of which were necessary. Behe's book Darwin Devolves claims that things like this would take billions of years and could not arise from random tinkering, but the corn was bred during the 20th century. When presented with T-urf13 as an example for the evolvability of irreducibly complex systems, the Discovery Institute resorted to its flawed probability argument based on false premises, akin to the Texas sharpshooter fallacy.", "title": "Response of the scientific community" }, { "paragraph_id": 79, "text": "Some critics, such as Jerry Coyne (professor of evolutionary biology at the University of Chicago) and Eugenie Scott (a physical anthropologist and former executive director of the National Center for Science Education) have argued that the concept of irreducible complexity and, more generally, intelligent design is not falsifiable and, therefore, not scientific.", "title": "Response of the scientific community" }, { "paragraph_id": 80, "text": "Behe argues that the theory that irreducibly complex systems could not have evolved can be falsified by an experiment where such systems are evolved. For example, he posits taking bacteria with no flagellum and imposing a selective pressure for mobility. If, after a few thousand generations, the bacteria evolved the bacterial flagellum, then Behe believes that this would refute his theory. This has been done: a laboratory experiment has been performed where \"immotile strains of the bacterium Pseudomonas fluorescens that lack flagella [...]
regained flagella within 96 hours via a two-step evolutionary pathway\", concluding that \"natural selection can rapidly rewire regulatory networks in very few, repeatable mutational steps\".", "title": "Response of the scientific community" }, { "paragraph_id": 81, "text": "Other critics take a different approach, pointing to experimental evidence that they consider falsification of the argument for intelligent design from irreducible complexity. For example, Kenneth Miller describes the lab work of Barry G. Hall on E. coli as showing that \"Behe is wrong\".", "title": "Response of the scientific community" }, { "paragraph_id": 82, "text": "Other evidence that irreducible complexity is not a problem for evolution comes from the field of computer science, which routinely uses computer analogues of the processes of evolution in order to automatically design complex solutions to problems. The results of such genetic algorithms are frequently irreducibly complex since the process, like evolution, both removes non-essential components over time as well as adding new components. The removal of unused components with no essential function, like the natural process where rock underneath a natural arch is removed, can produce irreducibly complex structures without requiring the intervention of a designer. Researchers applying these algorithms automatically produce human-competitive designs—but no human designer is required.", "title": "Response of the scientific community" }, { "paragraph_id": 83, "text": "Intelligent design proponents attribute to an intelligent designer those biological structures they believe are irreducibly complex and therefore they say a natural explanation is insufficient to account for them. 
However, critics view irreducible complexity as a special case of the \"complexity indicates design\" claim, and thus see it as an argument from ignorance and as a God-of-the-gaps argument.", "title": "Response of the scientific community" }, { "paragraph_id": 84, "text": "Eugenie Scott and Glenn Branch of the National Center for Science Education note that intelligent design arguments from irreducible complexity rest on the false assumption that a lack of knowledge of a natural explanation allows intelligent design proponents to assume an intelligent cause, when the proper response of scientists would be to say that we don't know, and further investigation is needed. Other critics describe Behe as saying that evolutionary explanations are not detailed enough to meet his standards, while at the same time presenting intelligent design as exempt from having to provide any positive evidence at all.", "title": "Response of the scientific community" }, { "paragraph_id": 85, "text": "Irreducible complexity is at its core an argument against evolution. If truly irreducible systems are found, the argument goes, then intelligent design must be the correct explanation for their existence. However, this conclusion is based on the assumption that current evolutionary theory and intelligent design are the only two valid models to explain life, a false dilemma.", "title": "Response of the scientific community" }, { "paragraph_id": 86, "text": "At the 2005 Kitzmiller v. Dover Area School District trial, expert witness testimony defending ID and IC was given by Behe and Scott Minnich, who had been one of the \"Johnson-Behe cadre of scholars\" at Pajaro Dunes in 1993, was prominent in ID, and was now a tenured associate professor in microbiology at the University of Idaho. 
Behe conceded that there are no peer-reviewed papers supporting his claims that complex molecular systems, like the bacterial flagellum, the blood-clotting cascade, and the immune system, were intelligently designed nor are there any peer-reviewed articles supporting his argument that certain complex molecular structures are \"irreducibly complex.\" There was extensive discussion of IC arguments about the bacterial flagellum, first published in Behe's 1996 book, and when Minnich was asked if similar claims in a 1994 Creation Research Society article presented the same argument, Minnich said he didn't have any problem with that statement.", "title": "In the Dover trial" }, { "paragraph_id": 87, "text": "In the final ruling of Kitzmiller v. Dover Area School District, Judge Jones specifically singled out irreducible complexity:", "title": "In the Dover trial" } ]
Irreducible complexity (IC) is the argument that certain biological systems with multiple interacting parts would not function if one of the parts were removed, so supposedly could not have evolved by successive small modifications from earlier less complex systems through natural selection, which would need all intermediate precursor systems to have been fully functional. This negative argument is then complemented by the claim that the only alternative explanation is a "purposeful arrangement of parts" inferring design by an intelligent agent. Irreducible complexity has become central to the creationist concept of intelligent design (ID), but the concept of irreducible complexity has been rejected by the scientific community, which regards intelligent design as pseudoscience. Irreducible complexity and specified complexity are the two main arguments used by intelligent-design proponents to support their version of the theological argument from design. The central concept, of biological complexity too improbable to have evolved by chance natural processes, was already featured in creation science. The 1989 school textbook Of Pandas and People introduced the alternative terminology of intelligent design; the 1993 edition was revised to include a variation of the same argument: it was later shown that these revisions were written by Michael Behe, a professor of biochemistry at Lehigh University. Behe introduced the expression irreducible complexity along with a full account of his arguments in his 1996 book Darwin's Black Box, and he said it made evolution through natural selection of random mutations impossible, or extremely improbable. This was based on the mistaken assumption that evolution relies on improvement of existing functions, ignoring how complex adaptations originate from changes in function, and disregarding published research. 
Evolutionary biologists have published rebuttals showing how systems discussed by Behe can evolve, and examples documented through comparative genomics show that complex molecular systems are formed by the addition of components as revealed by different temporal origins of their proteins. In the 2005 Kitzmiller v. Dover Area School District trial, Behe gave testimony on the subject of irreducible complexity. The court found that "Professor Behe's claim for irreducible complexity has been refuted in peer-reviewed research papers and has been rejected by the scientific community at large."
2001-12-15T01:08:59Z
2023-12-31T07:47:22Z
[ "Template:Npsn", "Template:Reflist", "Template:Cite web", "Template:Intelligent Design", "Template:Cite book", "Template:About", "Template:Main", "Template:Failed verification", "Template:Further", "Template:Citation needed", "Template:Needs update", "Template:Cite journal", "Template:Harvnb", "Template:Sfn", "Template:Cite news", "Template:Citation", "Template:ISBN", "Template:Webarchive", "Template:Short description" ]
https://en.wikipedia.org/wiki/Irreducible_complexity
15,388
Religion in pre-Islamic Arabia
Religion in pre-Islamic Arabia included indigenous Arabian polytheism, ancient Semitic religions, Christianity, Judaism, Mandaeism, and Zoroastrianism. Arabian polytheism, the dominant form of religion in pre-Islamic Arabia, was based on veneration of deities and spirits. Worship was directed to various gods and goddesses, including Hubal and the goddesses al-Lāt, al-‘Uzzā, and Manāt, at local shrines and temples such as the Kaaba in Mecca. Deities were venerated and invoked through a variety of rituals, including pilgrimages and divination, as well as ritual sacrifice. Different theories have been proposed regarding the role of Allah in Meccan religion. Many of the physical descriptions of the pre-Islamic gods are traced to idols, especially near the Kaaba, which is said to have contained up to 360 of them. Other religions were represented to varying, lesser degrees. The influence of the adjacent Roman and Aksumite civilizations resulted in Christian communities in the northwest, northeast, and south of Arabia. Christianity made a lesser impact in the remainder of the peninsula, but did secure some conversions. With the exception of Nestorianism in the northeast and the Persian Gulf, the dominant form of Christianity was Miaphysitism. The peninsula had been a destination for Jewish migration since Roman times, which had resulted in a diaspora community supplemented by local converts. Additionally, the influence of the Sasanian Empire resulted in Iranian religions being present in the peninsula. Zoroastrianism existed in the east and south, while there is evidence of either Manichaeism or Mazdakism being possibly practiced in Mecca. Until about the fourth century, almost all inhabitants of Arabia practiced polytheistic religions. Although significant Jewish and Christian minorities developed, polytheism remained the dominant belief system in pre-Islamic Arabia. 
The contemporary sources of information regarding the pre-Islamic Arabian religion and pantheon include a small number of inscriptions and carvings, pre-Islamic poetry, external sources such as Jewish and Greek accounts, as well as the Muslim tradition, such as the Qur'an and Islamic writings. Nevertheless, information is limited. One early attestation of Arabian polytheism was in Esarhaddon's Annals, mentioning Atarsamain, Nukhay, Ruldaiu, and Atarquruma. Herodotus, writing in his Histories, reported that the Arabs worshipped Orotalt (identified with Dionysus) and Alilat (identified with Aphrodite). Strabo stated the Arabs worshipped Dionysus and Zeus. Origen stated they worshipped Dionysus and Urania. Muslim sources regarding Arabian polytheism include the eighth-century Book of Idols by Hisham ibn al-Kalbi, which F.E. Peters argued to be the most substantial treatment of the religious practices of pre-Islamic Arabia, as well as the writings of the Yemeni historian al-Hasan al-Hamdani on South Arabian religious beliefs. According to the Book of Idols, descendants of the son of Abraham (Ishmael) who had settled in Mecca and migrated to other lands carried holy stones from the Kaaba with them, erected them, and circumambulated them like the Kaaba. This, according to al-Kalbi, led to the rise of idol worship. Based on this, it may be probable that Arabs originally venerated stones, later adopting idol-worship under foreign influences. The relationship between a god and a stone as his representation can be seen from the third-century Syriac work called the Homily of Pseudo-Meliton where he describes the pagan faiths of Syriac-speakers in northern Mesopotamia, who were mostly Arabs. According to F. E. Peters, "one of the characteristics of Arab paganism as it has come down to us is the absence of a mythology, narratives that might serve to explain the origin or history of the gods." 
Many of the deities have epithets, but are lacking myths or narratives to decode the epithets, making them generally uninformative. The pre-Islamic Arabian religions were polytheistic, with many of the deities' names known. Formal pantheons are more noticeable at the level of kingdoms, of variable sizes, ranging from simple city-states to collections of tribes. Tribes, towns, clans, lineages and families had their own cults too. Christian Julien Robin suggests that this structure of the divine world reflected the society of the time. Trade caravans also brought foreign religious and cultural influences. A large number of deities did not have proper names and were referred to by titles indicating a quality, a family relationship, or a locale preceded by "he who" or "she who" (dhū or dhāt respectively). The religious beliefs and practices of the nomadic Bedouin were distinct from those of the settled tribes of towns such as Mecca. Nomadic religious belief systems and practices are believed to have included fetishism, totemism and veneration of the dead but were connected principally with immediate concerns and problems and did not consider larger philosophical questions such as the afterlife. Settled urban Arabs, on the other hand, are thought to have believed in a more complex pantheon of deities. While the Meccans and the other settled inhabitants of the Hejaz worshiped their gods at permanent shrines in towns and oases, the Bedouin practiced their religion on the move. In South Arabia, mndh’t were anonymous guardian spirits of the community and the ancestor spirits of the family. They were known as 'the sun (shms) of their ancestors'. In North Arabia, ginnaye were known from Palmyrene inscriptions as "the good and rewarding gods" and were probably related to the jinn of west and central Arabia. Unlike jinn, ginnaye could not hurt nor possess humans and were much more similar to the Roman genius. 
According to common Arabian belief, soothsayers, pre-Islamic philosophers, and poets were inspired by the jinn. However, jinn were also feared and thought to be responsible for causing various diseases and mental illnesses. Aside from benevolent gods and spirits, there existed malevolent beings. These beings were not attested in the epigraphic record, but were alluded to in pre-Islamic Arabic poetry, and their legends were collected by later Muslim authors. Commonly mentioned are ghouls. Etymologically, the English word "ghoul" was derived from the Arabic ghul, from ghala, "to seize", related to the Sumerian galla. They are said to have a hideous appearance, with feet like those of an ass. Arabs were said to utter the following couplet if they should encounter one: "Oh ass-footed one, just bray away, we won't leave the desert plain nor ever go astray." Christian Julien Robin notes that all the known South Arabian divinities had a positive or protective role and that evil powers were only alluded to but were never personified. Some scholars postulate that in pre-Islamic Arabia, including in Mecca, Allah was considered to be a deity, possibly a creator deity or a supreme deity in a polytheistic pantheon. The word Allah (from the Arabic al-ilah meaning "the god") may have been used as a title rather than a name. The concept of Allah may have been vague in the Meccan religion. According to Islamic sources, Meccans and their neighbors believed that the goddesses Al-lāt, Al-‘Uzzá, and Manāt were the daughters of Allah. Regional variants of the word Allah occur in both pagan and Christian pre-Islamic inscriptions. References to Allah are found in the poetry of the pre-Islamic Arab poet Zuhayr bin Abi Sulma, who lived a generation before Muhammad, as well as pre-Islamic personal names. Muhammad's father's name was ʿAbd-Allāh, meaning "the servant of Allah". 
Charles Russell Coulter and Patricia Turner considered that Allah's name may be derived from a pre-Islamic god called Ailiah and is similar to El, Il, Ilah, and Jehovah. They also considered some of his characteristics to be seemingly based on lunar deities like Almaqah, Kahl, Shaker, Wadd and Warakh. Alfred Guillaume states that the connection between Ilah that came to form Allah and ancient Babylonian Il or El of ancient Israel is not clear. Wellhausen states that Allah was known from Jewish and Christian sources and was known to pagan Arabs as the supreme god. Winfried Corduan doubts the theory of Allah of Islam being linked to a moon god, stating that the term Allah functions as a generic term, like the term El-Elyon used as a title for the god Sin. South Arabian inscriptions from the fourth century AD refer to a god called Rahman ("The Merciful One") who had a monotheistic cult and was referred to as the "Lord of heaven and Earth". Aaron W. Hughes states that scholars are unsure whether he developed from the earlier polytheistic systems or developed due to the increasing significance of the Christian and Jewish communities, and that it is difficult to establish whether Allah was linked to Rahmanan. Maxime Rodinson, however, considers one of Allah's names, "Ar-Rahman", to have been used in the form of Rahmanan earlier. Al-Lāt, Al-‘Uzzá and Manāt were common names used for multiple goddesses across Arabia. G. R. Hawting states that modern scholars have frequently associated the names of Arabian goddesses Al-lāt, Al-‘Uzzá and Manāt with cults devoted to celestial bodies, particularly Venus, drawing upon evidence external to the Muslim tradition as well as in relation to Syria, Mesopotamia and the Sinai Peninsula. Allāt (Arabic: اللات) or al-Lāt was worshipped throughout the ancient Near East with various associations. 
Herodotus in the 5th century BC identifies Alilat (Greek: Ἀλιλάτ) as the Arabic name for Aphrodite (and, in another passage, for Urania), which is strong evidence for worship of Allāt in Arabia at that early date. Al-‘Uzzá (Arabic: العزى) was a fertility goddess or possibly a goddess of love. Manāt (Arabic: مناة) was the goddess of destiny. Al-Lāt's cult was spread in Syria and northern Arabia. From Safaitic and Hismaic inscriptions, it is probable that she was worshiped as Lat (lt). F. V. Winnet saw al-Lat as a lunar deity due to the association of a crescent with her in 'Ayn esh-Shallāleh and a Lihyanite inscription mentioning the name of Wadd, the Minaean moon god, over the title of fkl lt. René Dussaud and Gonzague Ryckmans linked her with Venus while others have thought her to be a solar deity. John F. Healey considers that al-Uzza actually might have been an epithet of al-Lāt before becoming a separate deity in the Meccan pantheon. Paola Corrente, writing in Redefining Dionysus, considers she might have been a god of vegetation or a celestial deity of atmospheric phenomena and a sky deity. The worship of sacred stones constituted one of the most important practices of the Semitic speaking peoples, including Arabs. Cult images of a deity were most often an unworked stone block. The most common name for these stone blocks was derived from the Semitic nsb ("to be stood upright"), but other names were used, such as Nabataean masgida ("place of prostration") and Arabic duwar ("object of circumambulation", this term often occurs in pre-Islamic Arabic poetry). These god-stones were usually a free-standing slab, but Nabataean god-stones are usually carved directly on the rock face. Facial features may be incised on the stone (especially in Nabataea), or astral symbols (especially in South Arabia). Under Greco-Roman influence, an anthropomorphic statue might be used instead. The Book of Idols describes two types of statues: idols (sanam) and images (wathan). 
If a statue were made of wood, gold, or silver, after a human form, it would be an idol, but if the statue were made of stone, it would be an image. Representation of deities in animal-form was common in South Arabia, such as the god Sayin from Hadhramaut, who was represented as either an eagle fighting a serpent or a bull. Sacred places were known as hima, haram or mahram, and within these places, all living things were considered inviolable and violence was forbidden. In most of Arabia, these places would take the form of open-air sanctuaries, with distinguishing natural features such as springs and forests. Cities would contain temples, enclosing the sacred area with walls, and featuring ornate structures. Sacred areas often had a guardian or a performer of cultic rites. These officials were thought to tend the area, receive offerings, and perform divination. They are known by many names, probably based on cultural-linguistic preference: afkal was used in the Hejaz, kâhin was used in the Sinai-Negev-Hisma region, and kumrâ was used in Aramaic-influenced areas. In South Arabia, rsw and 'fkl were used to refer to priests, and other words include qyn ("administrator") and mrtd ("consecrated to a particular divinity"). A more specialized staff is thought to have existed in major sanctuaries. Pilgrimages to sacred places would be made at certain times of the year. Pilgrim fairs of central and northern Arabia took place in specific months designated as violence-free, allowing several activities to flourish, such as trade, though in some places only exchange was permitted. The most important pilgrimage in Saba' was probably the pilgrimage of Almaqah at Ma'rib, performed in the month of dhu-Abhi (roughly in July). Two references attest the pilgrimage of Almaqah dhu-Hirran at 'Amran. The pilgrimage of Ta'lab Riyam took place in Mount Tur'at and the Zabyan temple at Hadaqan, while the pilgrimage of Dhu-Samawi, the god of the Amir tribe, took place in Yathill. 
Aside from Sabaean pilgrimages, the pilgrimage of Sayin took place at Shabwa. The pilgrimage of Mecca involved the stations of Mount Arafat, Muzdalifah, Mina and central Mecca that included Safa and Marwa as well as the Kaaba. Pilgrims at the first two stations performed wuquf or standing in adoration. At Mina, animals were sacrificed. The procession from Arafat to Muzdalifah, and from Mina to Mecca, in a pre-reserved route towards idols or an idol, was termed ijaza and ifada, with the latter taking place before sunset. At Jabal Quzah, fires were started during the sacred month. Nearby the Kaaba was located the betyl which was later called Maqam Ibrahim; a place called al-Ḥigr which Aziz al-Azmeh takes to be reserved for consecrated animals, basing his argument on a Sabaean inscription mentioning a place called mḥgr which was reserved for animals; and the Well of Zamzam. Both Safa and Marwa were adjacent to two sacrificial hills, one called Muṭ'im al Ṭayr and another Mujāwir al-Riḥ which was a pathway to Abu Kubais from where the Black Stone is reported to have originated. Meccan pilgrimages differed according to the rites of different cult associations, in which individuals and groups joined for religious purposes. The Ḥilla association performed the hajj in autumn season while the Ṭuls and Ḥums performed the umrah in spring. The Ḥums were the Quraysh, Banu Kinanah, Banu Khuza'a and Banu 'Amir. They did not perform the pilgrimage outside the zone of Mecca's haram, thus excluding Mount Arafat. They also developed certain dietary and cultural restrictions. According to Kitab al-Muhabbar, the Ḥilla denoted most of the Banu Tamim, Qays, Rabi`ah, Qūḍa'ah, Ansar, Khath'am, Bajīlah, Banu Bakr ibn Abd Manat, Hudhayl, Asad, Tayy and Bariq. The Ṭuls comprised the tribes of Yemen and Hadramaut, 'Akk, Ujayb and Īyād. The Basl recognised at least eight months of the calendar as holy. 
There was also another group which did not recognize the sanctity of Mecca's haram or holy months, unlike the other four. The ancient Arabs that inhabited the Arabian Peninsula before the advent of Islam used to profess a widespread belief in fatalism (ḳadar) alongside a fearful consideration for the sky and the stars, which they held to be ultimately responsible for every phenomenon that occurs on Earth and for the destiny of humankind. Accordingly, they shaped their entire lives in accordance with their interpretations of astral configurations and phenomena. In South Arabia, oracles were regarded as ms’l, or "a place of asking", and deities interacted by hr’yhw ("making them see") a vision, a dream, or even direct interaction. Otherwise deities interacted indirectly through a medium. There were three methods of chance-based divination attested in pre-Islamic Arabia; two of these methods, making marks in the sand or on rocks and throwing pebbles, are poorly attested. The other method, the practice of randomly selecting an arrow with instructions, was widely attested and was common throughout Arabia. A simple form of this practice was reportedly performed before the image of Dhu'l-Khalasa by a certain man, sometimes said to be the Kindite poet Imru al-Qays according to al-Kalbi. A more elaborate form of the ritual was performed before the image of Hubal. This form of divination was also attested in Palmyra, evidenced by an honorific inscription in the temple of al-Lat. The most common offerings were animals, crops, food, liquids, inscribed metal plaques or stone tablets, aromatics, edifices and manufactured objects. Camel-herding Arabs would devote some of their beasts to certain deities. The beasts would have their ears slit and would be left to pasture without a herdsman, allowing them to die a natural death. Pre-Islamic Arabians, especially pastoralist tribes, sacrificed animals as an offering to a deity. 
This type of offering was common and involved domestic animals such as camels, sheep and cattle, while game animals and poultry were rarely or never mentioned. Sacrifice rites were not tied to a particular location though they were usually practiced in sacred places. Sacrifice rites could be performed by the devotee, though according to Hoyland, women were probably not allowed. The victim's blood, according to pre-Islamic Arabic poetry and certain South Arabian inscriptions, was also 'poured out' on the altar stone, thus forming a bond between the human and the deity. According to Muslim sources, most sacrifices were concluded with communal feasts. In South Arabia, beginning with the Christian era, or perhaps a short while before, statuettes were presented before the deity, known as slm (male) or slmt (female). Human sacrifice was sometimes carried out in Arabia. The victims were generally prisoners of war, who represented the god's part of the victory in booty, although other forms might have existed. Blood sacrifice was definitely practiced in South Arabia, but few allusions to the practice are known, apart from some Minaean inscriptions. In the Hejaz, menstruating women were not allowed to be near the cult images. The area where Isaf and Na'ila's images stood was considered out-of-bounds for menstruating women. This was reportedly the same with Manaf. According to the Book of Idols, this rule applied to all the "idols". This was also the case in South Arabia, as attested in a South Arabian inscription from al-Jawf. Sexual intercourse in temples was prohibited, as attested in two South Arabian inscriptions. One legend concerning Isaf and Na'ila, when two lovers made love in the Kaaba and were petrified, joining the idols in the Kaaba, echoes this prohibition. The Dilmun civilization, which existed along the Persian Gulf coast and Bahrain until the 6th century BC, worshipped a pair of deities, Inzak and Meskilak. 
It is not known whether these were the only deities in the pantheon or whether there were others. The discovery of wells at the sites of a Dilmun temple and a shrine suggests that sweet water played an important part in religious practices. In the subsequent Greco-Roman period, there is evidence that the worship of non-indigenous deities was brought to the region by merchants and visitors. These included Bel, a god popular in the Syrian city of Palmyra, the Mesopotamian deities Nabu and Shamash, the Greek deities Poseidon and Artemis and the west Arabian deities Kahl and Manat. The main sources of religious information in pre-Islamic South Arabia are inscriptions, which number in the thousands, as well as the Quran, complemented by archaeological evidence. The civilizations of South Arabia are considered to have the most developed pantheon in the Arabian peninsula. In South Arabia, the most common god was 'Athtar, who was considered remote. The patron deity (shym) was considered to be of much more immediate significance than 'Athtar. Thus, the kingdom of Saba' had Almaqah, the kingdom of Ma'in had Wadd, the kingdom of Qataban had 'Amm, and the kingdom of Hadhramaut had Sayin. Each people was termed the "children" of their respective patron deity. Patron deities played a vital role in sociopolitical terms, their cults serving as the focus of a person's cohesion and loyalty. Evidence from surviving inscriptions suggests that each of the southern kingdoms had its own pantheon of three to five deities, the major deity always being a god. For example, the pantheon of Saba comprised Almaqah, the major deity, together with 'Athtar, Haubas, Dhat-Himyam, and Dhat-Badan. The main god in Ma'in and Himyar was 'Athtar, in Qataban it was Amm, and in Hadhramaut it was Sayin. 'Amm was a lunar deity and was associated with the weather, especially lightning. One of the most frequent titles of the god Almaqah was "Lord of Awwam". 
Anbay was an oracular god of Qataban and also the spokesman of Amm. His name was invoked in royal regulations regarding water supply. Anbay's name was related to that of the Babylonian deity Nabu. Hawkam was invoked alongside Anbay as god of "command and decision" and his name is derived from the root word "to be wise". Each kingdom's central temple was the focus of worship for the main god and would be the destination for an annual pilgrimage, with regional temples dedicated to a local manifestation of the main god. Other beings worshipped included local deities or deities dedicated to specific functions as well as deified ancestors. The encroachment of northern Arab tribes into South Arabia also introduced northern Arab deities into the region. The three goddesses al-Lat, al-Uzza and Manat became known as Lat/Latan, Uzzayan and Manawt. Uzzayan's cult in particular was widespread in South Arabia, and in Qataban she was invoked as a guardian of the final royal palace. Lat/Latan was not significant in South Arabia, but appears to be popular with the Arab tribes bordering Yemen. Other Arab deities include Dhu-Samawi, a god originally worshipped by the Amir tribe, and Kahilan, perhaps related to Kahl of Qaryat al-Faw. Bordering Yemen, the Azd Sârat tribe of the Asir region was said to have worshipped Dhu'l-Shara, Dhu'l-Kaffayn, Dhu'l-Khalasa and A'im. According to the Book of Idols, Dhu'l-Kaffayn originated from a clan of the Banu Daws. In addition to being worshipped among the Azd, Dushara is also reported to have a shrine amongst the Daws. Dhu’l-Khalasa was an oracular god and was also worshipped by the Bajila and Khatham tribes. Before conversion to Christianity, the Aksumites followed a polytheistic religion that was similar to that of Southern Arabia. The lunar god Hawbas was worshiped in South Arabia and Aksum. The name of the god Astar, a sky-deity was related to that of 'Attar. The god Almaqah was worshiped at Hawulti-Melazo. 
The South Arabian gods in Aksum included Dhat-Himyam and Dhat-Ba'adan. A stone later reused for the church of Enda-Cerqos at Melazo mentions these gods. Hawbas is also mentioned on an altar and sphinx in Dibdib. The name of Nrw, who is mentioned in Aksum inscriptions, is related to that of the South Arabian god Nawraw, a deity of stars. The Himyarite kings radically opposed polytheism in favor of Judaism, beginning officially in 380. The last traces of polytheism in South Arabia, an inscription commemorating a construction project with a polytheistic invocation and another mentioning the temple of Ta’lab, both date from just after 380 (the former dating to the rule of the king Dhara’amar Ayman, and the latter dating to the year 401–402). The rejection of polytheism from the public sphere did not mean its complete extinction, as polytheism likely continued in the private sphere. The Kinda tribe's chief god was Kahl, after whom their capital, Qaryat Dhat Kahl (modern Qaryat al-Faw), was named. His name appears in many inscriptions and rock engravings on the slopes of the Tuwayq, on the walls of the souk of the village, in the residential houses and on the incense burners. An inscription in Qaryat Dhat Kahl invokes the gods Kahl, Athtar al-Shariq and Lah. According to Islamic sources, the Hejaz region was home to three important shrines dedicated to al-Lat, al-’Uzza and Manat. The shrine and idol of al-Lat, according to the Book of Idols, once stood in Ta'if, and was primarily worshipped by the Banu Thaqif tribe. Al-’Uzza's principal shrine was in Nakhla and she was the chief goddess of the Quraysh tribe. Manāt's idol, reportedly the oldest of the three, was erected on the seashore between Medina and Mecca, and was honored by the Aws and Khazraj tribes. Inhabitants of several areas venerated Manāt, performing sacrifices before her idol, and the pilgrimages of some were not considered complete until they had visited Manāt and shaved their heads.
In the Muzdalifah region near Mecca, Quzah, a god of rains and storms, was worshipped. In pre-Islamic times pilgrims used to halt at the "hill of Quzah" before sunrise. Qusai ibn Kilab is traditionally reported to have introduced the association of fire worship with him on Muzdalifah. Various other deities were venerated in the area by specific tribes, such as the god Suwa' by the Banu Hudhayl tribe and the god Nuhm by the Muzaynah tribe. The majority of extant information about Mecca during the rise of Islam and earlier times comes from the text of the Quran itself and later Muslim sources such as the prophetic biography literature dealing with the life of Muhammad and the Book of Idols. Alternative sources are so fragmentary and specialized that writing a convincing history of this period based on them alone is impossible. Several scholars hold that the sīra literature is not independent of the Quran but was fabricated to explain its verses. There is evidence to support the contention that some reports of the sīras are of dubious validity, but there is also evidence to support the contention that the sīra narratives originated independently of the Quran. Compounding the problem is that the earliest extant Muslim historical works, including the sīras, were composed in their definitive form more than a century after the beginning of the Islamic era. Some of these works were based on subsequently lost earlier texts, which in their turn recorded a fluid oral tradition. Scholars do not agree as to the time when such oral accounts began to be systematically collected and written down, and they differ greatly in their assessment of the historical reliability of the available texts. The Kaaba, whose environs were regarded as sacred (haram), became a national shrine under the custodianship of the Quraysh, the chief tribe of Mecca, which made the Hejaz the most important religious area in north Arabia.
Its role was solidified by a confrontation with the Christian king Abraha, who controlled much of Arabia from a seat of power in Yemen in the middle of the sixth century. Abraha had recently constructed a splendid church in Sana'a, and he wanted to make that city a major center of pilgrimage, but Mecca's Kaaba presented a challenge to his plan. Abraha found a pretext for an attack on Mecca, presented by different sources variously as pollution of the church by a tribe allied to the Meccans or as an attack on Abraha's grandson in Najran by a Meccan party. The defeat of the army he assembled to conquer Mecca is recounted with miraculous details by the Islamic tradition and is also alluded to in the Quran and pre-Islamic poetry. After the battle, which probably occurred around 565, the Quraysh became a dominant force in western Arabia, receiving the title "God's people" (ahl Allah) according to Islamic sources, and formed the cult association of ḥums, which tied members of many tribes in western Arabia to the Kaaba. According to tradition, the Kaaba was a cube-like, originally roofless structure housing a black stone revered as a relic. The sanctuary was dedicated to Hubal (Arabic: هبل), who, according to some sources, was worshipped as the greatest of the 360 idols the Kaaba contained, which probably represented the days of the year. Ibn Ishaq and Ibn Al-Kalbi both report that the human-shaped idol of Hubal made of precious stone (agate, according to the Book of Idols) came into the possession of the Quraysh with its right hand broken off and that the Quraysh made a hand of gold to replace it. A soothsayer performed divination in the shrine by drawing ritual arrows, and vows and sacrifices were made to assure success. Marshall Hodgson argues that relations with deities and fetishes in pre-Islamic Mecca were maintained chiefly on the basis of bargaining, where favors were expected in return for offerings.
A deity's or oracle's failure to provide the desired response was sometimes met with anger. Different theories have been proposed regarding the role of Allah in Meccan religion. According to one hypothesis, which goes back to Julius Wellhausen, Allah (the supreme deity of the tribal federation around Quraysh) was a designation that consecrated the superiority of Hubal (the supreme deity of Quraysh) over the other gods. However, there is also evidence that Allah and Hubal were two distinct deities. According to this second hypothesis, the Kaaba was first consecrated to a supreme deity named Allah and then hosted the pantheon of Quraysh after their conquest of Mecca, about a century before the time of Muhammad. Some inscriptions seem to indicate the use of Allah as a name of a polytheist deity centuries earlier, but nothing precise is known about this use. Some scholars have suggested that Allah may have represented a remote creator god who was gradually eclipsed by more particularized local deities. There is disagreement on whether Allah played a major role in the Meccan religious cult. No iconic representation or idol of Allah is known to have existed. The three chief goddesses of Meccan religion were al-Lat, Al-‘Uzzá, and Manāt, who were called the daughters of Allah. Egerton Sykes, meanwhile, states that Al-lāt was the female counterpart of Allah while Uzza was a name given by Banu Ghatafan to the planet Venus. Other deities of the Quraysh in Mecca included Manaf, Isaf and Na’ila. Although the early Arab historian Al-Tabari calls Manaf (Arabic: مناف) "one of the greatest deities of Mecca", very little information is available about it. Women touched his idol as a token of blessing, and kept away from it during menstruation. Gonzague Ryckmans described this as a practice peculiar to Manaf, but according to the Encyclopedia of Islam, a report from Ibn Al-Kalbi indicates that it was common to all idols.
Muhammad's great-great-grandfather's name was Abd Manaf, which means "slave of Manaf". Manaf is thought by some scholars to have been a sun-god. The idols of Isāf and Nā'ila were located near the Black Stone with a talbiyah performed to Isāf during sacrifices. Various legends existed about the idols, including one that they were petrified after they committed adultery in the Kaaba. The pantheon of the Quraysh was not identical with that of the tribes who entered into various cult and commercial associations with them, especially that of the hums. Christian Julien Robin argues that the former was composed principally of idols that were in the sanctuary of Mecca, including Hubal and Manaf, while the pantheon of the associations was superimposed on it, and its principal deities included the three goddesses, who had neither idols nor a shrine in that city. The second half of the sixth century was a period of political disorder in Arabia and communication routes were no longer secure. Religious divisions were an important cause of the crisis. Judaism became the dominant religion in Yemen while Christianity took root in the Persian Gulf area. In line with the broader trends of the ancient world, Arabia yearned for a more spiritual form of religion and began believing in an afterlife, while the choice of religion increasingly became personal rather than communal. While many were reluctant to convert to a foreign faith, those faiths provided intellectual and spiritual reference points, and the old pagan vocabulary of Arabic began to be replaced by Jewish and Christian loanwords from Aramaic everywhere, including Mecca. The distribution of pagan temples supports Gerald Hawting's argument that Arabian polytheism was marginalized in the region and already dying in Mecca on the eve of Islam.
The practice of polytheistic cults was increasingly limited to the steppe and the desert, and in Yathrib (later known as Medina), which included two tribes with polytheistic majorities, the absence of a public pagan temple in the town or its immediate neighborhood indicates that polytheism was confined to the private sphere. Looking at the text of the Quran itself, Hawting has also argued that the criticism of idolaters and polytheists contained in the Quran is in fact a hyperbolic reference to other monotheists, in particular the Arab Jews and Arab Christians, whose religious beliefs were considered imperfect. According to some traditions, the Kaaba contained no statues, but its interior was decorated with images of Mary and Jesus, prophets, angels, and trees. To counter the effects of anarchy, the institution of sacred months, during which every act of violence was prohibited, was reestablished. During those months, it was possible to participate in pilgrimages and fairs without danger. The Quraysh upheld the principle of two annual truces, one of one month and the second of three months, which conferred a sacred character to the Meccan sanctuary. The cult association of hums, in which individuals and groups partook in the same rites, was primarily religious, but it also had important economic consequences. Although, as Patricia Crone has shown, Mecca could not compare with the great centers of caravan trade on the eve of Islam, it was probably one of the most prosperous and secure cities of the peninsula, since, unlike many of them, it did not have surrounding walls. Pilgrimage to Mecca was a popular custom. Some Islamic rituals, including processions around the Kaaba and between the hills of al-Safa and Marwa, as well as the salutation "we are here, O Allah, we are here" repeated on approaching the Kaaba are believed to have antedated Islam.
Spring water acquired a sacred character in Arabia early on and Islamic sources state that the well of Zamzam became holy long before the Islamic era. According to Ibn Sa'd, the opposition in Mecca started when the prophet of Islam, Muhammad, delivered verses that "spoke shamefully of the idols they (the Meccans) worshiped other than Himself (God) and mentioned the perdition of their fathers who died in disbelief". According to William Montgomery Watt, as the ranks of Muhammad's followers swelled, he became a threat to the local tribes and the rulers of the city, whose wealth rested upon the Kaaba, the focal point of Meccan religious life, which Muhammad threatened to overthrow. Muhammad's denunciation of the Meccan traditional religion was especially offensive to his own tribe, the Quraysh, as they were the guardians of the Kaaba. The conquest of Mecca around 629–630 AD led to the destruction of the idols around the Kaaba, including Hubal. Following the conquest, shrines and temples dedicated to deities were destroyed, such as the shrines to al-Lat, al-’Uzza and Manat in Ta’if, Nakhla and al-Qudayd respectively. Less complex societies outside South Arabia often had smaller pantheons, with the patron deity having much prominence. The deities attested in north Arabian inscriptions include Ruda, Nuha, Allah, Dathan, and Kahl. Inscriptions in a North Arabian dialect in the region of Najd referring to Nuha describe emotions as a gift from him. They also refer to Ruda as being responsible for all things good and bad. The Safaitic tribes in particular prominently worshipped the goddess al-Lat as a bringer of prosperity. The Syrian god Baalshamin was also worshipped by Safaitic tribes and is mentioned in Safaitic inscriptions. Religious worship amongst the Qedarites, an ancient tribal confederation that was probably subsumed into Nabataea around the 2nd century AD, was centered around a polytheistic system in which women rose to prominence.
Divine images of the gods and goddesses worshipped by Qedarite Arabs, as noted in Assyrian inscriptions, included representations of Atarsamain, Nuha, Ruda, Dai, Abirillu and Atarquruma. The female guardian of these idols, usually the reigning queen, served as a priestess (apkallatu, in Assyrian texts) who communed with the other world. There is also evidence that the Qedar worshipped al-Lat, to whom the inscription on a silver bowl from a king of Qedar is dedicated. In the Babylonian Talmud, which was passed down orally for centuries before being transcribed c. 500 AD, in tractate Taanis (folio 5b), it is said that most Qedarites worshipped pagan gods. The Aramaic stele inscription discovered by Charles Hubert in 1880 at Tayma records that the introduction of a new god, Salm of hgm, into the city's pantheon was permitted by three local gods – Salm of Mahram, who was the chief god, Shingala, and Ashira. The name Salm means "image" or "idol". The Midianites, a people referred to in the Book of Genesis and located in north-western Arabia, may have worshipped Yahweh. Indeed, some scholars believe that Yahweh was originally a Midianite god and that he was subsequently adopted by the Israelites. An Egyptian temple of Hathor continued to be used during the Midianite occupation of the site, although images of Hathor were defaced, suggesting Midianite opposition. The Midianites transformed it into a desert tent-shrine set up with a copper sculpture of a snake. The Lihyanites worshipped the god Dhu-Ghabat and rarely turned to others for their needs. Dhu-Ghabat's name means "he of the thicket", based on the etymology of gabah, meaning forest or thicket. Al-Kutba', a god of writing probably related to a Babylonian deity and perhaps brought into the region by the Babylonian king Nabonidus, is also mentioned in Lihyanite inscriptions. The worship of the Hermonian gods Leucothea and Theandrios was spread from Phoenicia to Arabia.
According to the Book of Idols, the Tayy tribe worshipped al-Fals, whose idol stood on Jabal Aja, while the Kalb tribe worshipped Wadd, who had an idol in Dumat al-Jandal. The Nabataeans worshipped primarily northern Arabian deities. Under foreign influences, they also incorporated foreign deities and elements into their beliefs. The Nabataeans' chief god was Dushara. In Petra, the only major goddess was Al-‘Uzzá, who assumed the traits of Isis, Tyche and Aphrodite. It is unknown whether her worship and identity there were related to her cult at Nakhla and elsewhere. Nabataean inscriptions define Allāt and Al-Uzza as the "bride of Dushara". Al-Uzza may have been an epithet of Allāt in the Nabataean religion, according to John F. Healey. Outside Petra, other deities were worshipped; for example, Hubal and Manat were invoked in the Hejaz, and al-Lat was invoked in the Hauran and the Syrian desert. The Nabataean king Obodas I, who founded Obodat, was deified and worshipped as a god. The Nabataeans also worshipped Shay al-Qawm, al-Kutba', and various Greco-Roman deities such as Nike and Tyche. Maxime Rodinson suggests that Hubal, who was popular in Mecca, had a Nabataean origin. The worship of Pakidas, a Nabataean god, is attested at Gerasa alongside Hera in an inscription dated to the first century AD, while an Arabian god is also attested by three inscriptions dated to the second century. The Nabataeans were known for their elaborate tombs, which were not mere displays of status but were meant to be comfortable places for the dead. Petra has many "sacred high places" which include altars that have usually been interpreted as places of human sacrifice, although, since the 1960s, an alternative theory that they are "exposure platforms" for placing the corpses of the deceased as part of a funerary ritual has been put forward. However, there is, in fact, little evidence for either proposition. Palmyra was a cosmopolitan society, with its population being a mix of Aramaeans and Arabs.
The Arabs of Palmyra worshipped al-Lat, Rahim and Shamash. The temple of al-Lat was established by the Bene Ma'zin tribe, who were probably an Arab tribe. The nomads of the countryside worshipped a set of deities bearing Arab names and attributes, the most prominent of whom was Abgal, who is not attested in Palmyra itself. Ma'n, an Arab god, was worshipped alongside Abgal in a temple dedicated in 195 AD at Khirbet Semrin in the Palmyrene region, while an inscription dated 194 AD at Ras esh-Shaar calls him the "good and bountiful god". A stele at Ras esh-Shaar shows him riding a horse with a lance while the god Sa'd is riding a camel. Abgal, Ma'n and Sa'd were known as the genii. The god Ashar was represented on a stele in Dura-Europos alongside another god, Sa'd. The former was represented on a horse in Arab dress while the other was shown standing on the ground. Both had Parthian hairstyles, large facial hair and moustaches, as well as similar clothing. Ashar's name was used theophorically in the Arab-majority areas of the Northwest Semitic language region, such as Hatra, where names like "Refuge of Ashar", "Servant of Ashar" and "Ashar has given" are recorded in inscriptions. In Edessa, the solar deity was the primary god around the time of the Roman Emperor Julian, and this worship was presumably brought in by migrants from Arabia. Julian's oration delivered to the denizens of the city mentioned that they worshipped the Sun surrounded by Azizos and Monimos, whom Iamblichus identified with Ares and Hermes respectively. Monimos was derived from Mu'nim, "the favourable one", and was another name of Ruda or Ruldaiu, as apparent from spellings of his name in Sennacherib's Annals. The idol of the god al-Uqaysir was, according to the Book of Idols, located in Syria, and was worshipped by the tribes of Quda'a, Lakhm, Judham, Amela, and Ghatafan.
Adherents would go on a pilgrimage to the idol and shave their heads, then mix their hair with wheat, "for every single hair a handful of wheat". A shrine to Dushara has been discovered in the harbour of ancient Puteoli in Italy. The city was an important nexus for trade to the Near East, and it is known to have had a Nabataean presence during the mid-1st century BCE. A Minaean altar dedicated to Wadd evidently existed in Delos, containing two inscriptions in Minaean and Greek respectively. The Bedouin were introduced to Meccan ritualistic practices as they frequented settled towns of the Hejaz during the four months of the "holy truce", the first three of which were devoted to religious observance, while the fourth was set aside for trade. Alan Jones infers from Bedouin poetry that the gods, even Allah, were less important to the Bedouins than Fate. They seem to have had little trust in rituals and pilgrimages as means of propitiating Fate, but had recourse to divination and soothsayers (kahins). The Bedouins regarded some trees, wells, caves and stones as sacred objects, either as fetishes or as means of reaching a deity. They created sanctuaries where people could worship fetishes. The Bedouins had a code of honor which Fazlur Rahman Malik states may be regarded as their religious ethics. This code encompassed women, bravery, hospitality, honouring one's promises and pacts, and vengeance. They believed that the ghost of a slain person would cry out from the grave until their thirst for blood was quenched. Practices such as the killing of infant girls were often regarded as having religious sanction. Numerous mentions of jinn in the Quran and testimony of both pre-Islamic and Islamic literature indicate that the belief in spirits was prominent in pre-Islamic Bedouin religion. However, there is evidence that the word jinn is derived from Aramaic ginnaye, which was widely attested in Palmyrene inscriptions.
The Aramaic word was used by Christians to designate pagan gods reduced to the status of demons, and was introduced into Arabic folklore only late in the pre-Islamic era. Julius Wellhausen has observed that such spirits were thought to inhabit desolate, dingy and dark places and that they were feared. One had to protect oneself from them, but they were not the objects of a true cult. Bedouin religious experience also included an apparently indigenous cult of ancestors. The dead were not regarded as powerful, but rather as deprived of protection and needing the charity of the living as a continuation of social obligations beyond the grave. Only certain ancestors, especially heroes from whom the tribe was said to derive its name, seem to have been objects of real veneration. Iranian religions existed in pre-Islamic Arabia owing to the Sasanian military presence along the Persian Gulf and in South Arabia, and to trade routes between the Hejaz and Iraq. Some Arabs in the northeast of the peninsula converted to Zoroastrianism and several Zoroastrian temples were constructed in Najd. Some members of the Banu Tamim tribe converted to the religion. There is also evidence of the existence of Manichaeism in Arabia, as several early sources indicate a presence of "zandaqas" in Mecca, although the term could also be interpreted as referring to Mazdakism. However, according to the most recent research by Tardieu, the prevalence of Manichaeism in Mecca during the 6th and 7th centuries, when Islam emerged, cannot be proven. Similar reservations regarding the appearance of Manichaeism and Mazdakism in pre-Islamic Mecca are offered by Trompf & Mikkelsen et al. in their latest work (2018). There is evidence for the circulation of Iranian religious ideas in the form of Persian loanwords in the Quran, such as firdaws (paradise). Zoroastrianism was also present in Eastern Arabia and Persian-speaking Zoroastrians lived in the region.
Zoroastrianism was introduced into the region, including modern-day Bahrain, during the rule of Persian empires beginning around 250 BC. It was mainly practiced in Bahrain by Persian settlers. Zoroastrianism was also practiced in the Persian-ruled area of modern-day Oman. The religion also existed in the Persian-ruled area of modern-day Yemen. The descendants of the Abna, the Persian conquerors of Yemen, were followers of Zoroastrianism. Yemen's Zoroastrians, who had the jizya imposed on them after being conquered by Muhammad, are mentioned by the Islamic historian al-Baladhuri. According to Serjeant, the Baharna people may be the Arabized descendants of converts from the original population of ancient Persians (majus) as well as other religions. A thriving community of Jewish tribes existed in pre-Islamic Arabia and included both sedentary and nomadic communities. Jews had migrated into Arabia from Roman times onwards. Arabian Jews spoke Arabic as well as Hebrew and Aramaic and had contact with Jewish religious centers in Babylonia and Palestine. The Yemeni Himyarites converted to Judaism in the 4th century, and some of the Kinda were also converted in the 4th/5th century. Jewish tribes existed in all major Arabian towns during Muhammad's time, including in Tayma and Khaybar as well as Medina, with twenty tribes living in the peninsula. Tomb inscriptions show that Jews also lived in Mada'in Saleh and Al-'Ula. There is evidence that Jewish converts in the Hejaz were regarded as Jews by other Jews, as well as by non-Jews, and sought advice from Babylonian rabbis on matters of attire and kosher food. In at least one case, it is known that an Arab tribe agreed to adopt Judaism as a condition for settling in a town dominated by Jewish inhabitants. Some Arab women in Yathrib/Medina are said to have vowed to make their child a Jew if the child survived, since they considered the Jews to be people "of knowledge and the book" (ʿilmin wa-kitābin).
Philip Hitti infers from proper names and agricultural vocabulary that the Jewish tribes of Yathrib consisted mostly of Judaized clans of Arabian and Aramaean origin. The key role played by Jews in the trade and markets of the Hejaz meant that market day for the week was the day preceding the Jewish Sabbath. This day, which was called aruba in Arabic, also provided occasion for legal proceedings and entertainment, which in turn may have influenced the choice of Friday as the day of Muslim congregational prayer. Toward the end of the sixth century, the Jewish communities in the Hejaz were in a state of economic and political decline, but they continued to flourish culturally in and beyond the region. They had developed their distinctive beliefs and practices, with a pronounced mystical and eschatological dimension. In the Islamic tradition, based on a phrase in the Quran, Arab Jews are said to have referred to Uzair as the son of Allah, although the historical accuracy of this assertion has been disputed. Jewish agriculturalists lived in the region of Eastern Arabia. According to Robert Bertram Serjeant, the Baharna may be the Arabized "descendants of converts from Christians (Arameans), Jews and ancient Persians (Majus) inhabiting the island and cultivated coastal provinces of Eastern Arabia at the time of the Arab conquest". From the Islamic sources, it seems that Judaism was the religion most followed in Yemen. Ya'qubi claimed all Yemenites to be Jews; Ibn Hazm, however, states that only Himyarites and some Kindites were Jews. The main areas of Christian influence in Arabia were on the northeastern and northwestern borders and in what was to become Yemen in the south. The northwest was under the influence of Christian missionary activity from the Roman Empire, where the Ghassanids, a client kingdom of the Romans, were converted to Christianity.
In the south, particularly at Najran, a centre of Christianity developed as a result of the influence of the Christian Kingdom of Axum based on the other side of the Red Sea in Ethiopia. Some of the Banu Harith had converted to Christianity. One family of the tribe built a large church at Najran called Deir Najran, also known as the "Ka'ba of Najran". Both the Ghassanids and the Christians in the south adopted Monophysitism. The third area of Christian influence was on the northeastern borders, where the Lakhmids, a client tribe of the Sassanians, adopted Nestorianism, the form of Christianity with the most influence in the Sassanian Empire. As the Persian Gulf region of Arabia increasingly fell under the influence of the Sassanians from the early third century, many of the inhabitants were exposed to Christianity following the eastward dispersal of the religion by Mesopotamian Christians. However, it was not until the fourth century that Christianity gained popularity in the region with the establishment of monasteries and a diocesan structure. In pre-Islamic times, the population of Eastern Arabia consisted of Christianized Arabs (including Abd al-Qays) and Aramean Christians among other religions. Syriac functioned as a liturgical language. Serjeant states that the Baharna may be the Arabized descendants of converts from the original population of Christians (Aramaeans), among other religions, at the time of the Arab conquests. Beth Qatraye, meaning "region of the Qataris" in Syriac, was the Christian name used for the region encompassing north-eastern Arabia. It included Bahrain, Tarout Island, Al-Khatt, Al-Hasa, and Qatar. Oman and what is today the United Arab Emirates comprised the diocese known as Beth Mazunaye. The name was derived from 'Mazun', the Persian name for Oman and the United Arab Emirates. Sohar was the central city of the diocese.
In Nejd, in the centre of the peninsula, there is evidence of members of two tribes, Kinda and Taghlib, converting to Christianity in the 6th century. However, in the Hejaz in the west, whilst there is evidence of the presence of Christianity, it is not thought to have been significant amongst the indigenous population of the area. Arabicized Christian names were fairly common among pre-Islamic Arabians, which has been attributed to the influence that Syrianized Christian Arabs had on Bedouins of the peninsula for several centuries before the rise of Islam. Neal Robinson, based on verses in the Quran, believes that some Arab Christians may have held unorthodox beliefs such as the worshipping of a divine triad of God the father, Jesus the Son and Mary the Mother. Furthermore, there is evidence that unorthodox groups such as the Collyridians, whose adherents worshipped Mary, were present in Arabia, and it has been proposed that the Quran refers to their beliefs. However, other scholars, notably Mircea Eliade, William Montgomery Watt, G. R. Hawting and Sidney H. Griffith, cast doubt on the historicity or reliability of such references in the Quran. Their views are as follows:
Religion in pre-Islamic Arabia included indigenous Arabian polytheism, ancient Semitic religions, Christianity, Judaism, Mandaeism, and Zoroastrianism.
Arabian polytheism, the dominant form of religion in pre-Islamic Arabia, was based on veneration of deities and spirits. Worship was directed to various gods and goddesses, including Hubal and the goddesses al-Lāt, al-‘Uzzā, and Manāt, at local shrines and temples such as the Kaaba in Mecca. Deities were venerated and invoked through a variety of rituals, including pilgrimages and divination, as well as ritual sacrifice. Different theories have been proposed regarding the role of Allah in Meccan religion. Many of the physical descriptions of the pre-Islamic gods are traced to idols, especially near the Kaaba, which is said to have contained up to 360 of them.
Other religions were represented to varying, lesser degrees. The influence of the adjacent Roman and Aksumite civilizations resulted in Christian communities in the northwest, northeast, and south of Arabia. Christianity made a lesser impact in the remainder of the peninsula, but did secure some conversions. With the exception of Nestorianism in the northeast and the Persian Gulf, the dominant form of Christianity was Miaphysitism. The peninsula had been a destination for Jewish migration since Roman times, which had resulted in a diaspora community supplemented by local converts. Additionally, the influence of the Sasanian Empire resulted in Iranian religions being present in the peninsula. Zoroastrianism existed in the east and south, while there is evidence of either Manichaeism or Mazdakism possibly being practiced in Mecca.
Until about the fourth century, almost all inhabitants of Arabia practiced polytheistic religions. Although significant Jewish and Christian minorities developed, polytheism remained the dominant belief system in pre-Islamic Arabia.
The contemporary sources of information regarding the pre-Islamic Arabian religion and pantheon include a small number of inscriptions and carvings, pre-Islamic poetry, external sources such as Jewish and Greek accounts, as well as the Muslim tradition, such as the Quran and Islamic writings. Nevertheless, information is limited.
One early attestation of Arabian polytheism was in Esarhaddon's Annals, mentioning Atarsamain, Nukhay, Ruldaiu, and Atarquruma. Herodotus, writing in his Histories, reported that the Arabs worshipped Orotalt (identified with Dionysus) and Alilat (identified with Aphrodite). Strabo stated the Arabs worshipped Dionysus and Zeus. Origen stated they worshipped Dionysus and Urania.
Muslim sources regarding Arabian polytheism include the eighth-century Book of Idols by Hisham ibn al-Kalbi, which F.E. Peters argued to be the most substantial treatment of the religious practices of pre-Islamic Arabia, as well as the writings of the Yemeni historian al-Hasan al-Hamdani on South Arabian religious beliefs.
According to the Book of Idols, descendants of the son of Abraham (Ishmael) who had settled in Mecca and migrated to other lands carried holy stones from the Kaaba with them, erected them, and circumambulated them like the Kaaba. This, according to al-Kalbi, led to the rise of idol worship. Based on this, it may be that Arabs originally venerated stones, later adopting idol-worship under foreign influences. The relationship between a god and a stone as his representation can be seen from the third-century Syriac work called the Homily of Pseudo-Meliton, in which the author describes the pagan faiths of Syriac-speakers in northern Mesopotamia, who were mostly Arabs.
According to F. E. Peters, "one of the characteristics of Arab paganism as it has come down to us is the absence of a mythology, narratives that might serve to explain the origin or history of the gods." Many of the deities have epithets, but lack myths or narratives to decode the epithets, making them generally uninformative.
The pre-Islamic Arabian religions were polytheistic, with many of the deities' names known. Formal pantheons are more noticeable at the level of kingdoms, of variable sizes, ranging from simple city-states to collections of tribes. Tribes, towns, clans, lineages and families had their own cults too. Christian Julien Robin suggests that this structure of the divine world reflected the society of the time. Trade caravans also brought foreign religious and cultural influences.
A large number of deities did not have proper names and were referred to by titles indicating a quality, a family relationship, or a locale preceded by "he who" or "she who" (dhū or dhāt respectively).
The religious beliefs and practices of the nomadic Bedouin were distinct from those of the settled tribes of towns such as Mecca. Nomadic religious belief systems and practices are believed to have included fetishism, totemism and veneration of the dead but were connected principally with immediate concerns and problems and did not consider larger philosophical questions such as the afterlife.
Settled urban Arabs, on the other hand, are thought to have believed in a more complex pantheon of deities. While the Meccans and the other settled inhabitants of the Hejaz worshiped their gods at permanent shrines in towns and oases, the Bedouin practiced their religion on the move.", "title": "Worship" }, { "paragraph_id": 12, "text": "In South Arabia, mndh’t were anonymous guardian spirits of the community and the ancestor spirits of the family. They were known as 'the sun (shms) of their ancestors'.", "title": "Worship" }, { "paragraph_id": 13, "text": "In North Arabia, ginnaye were known from Palmyrene inscriptions as \"the good and rewarding gods\" and were probably related to the jinn of west and central Arabia. Unlike jinn, ginnaye could not hurt nor possess humans and were much more similar to the Roman genius. According to common Arabian belief, soothsayers, pre-Islamic philosophers, and poets were inspired by the jinn. However, jinn were also feared and thought to be responsible for causing various diseases and mental illnesses.", "title": "Worship" }, { "paragraph_id": 14, "text": "Aside from benevolent gods and spirits, there existed malevolent beings. These beings were not attested in the epigraphic record, but were alluded to in pre-Islamic Arabic poetry, and their legends were collected by later Muslim authors.", "title": "Worship" }, { "paragraph_id": 15, "text": "Commonly mentioned are ghouls. Etymologically, the English word \"ghoul\" was derived from the Arabic ghul, from ghala, \"to seize\", related to the Sumerian galla. They are said to have a hideous appearance, with feet like those of an ass. 
Arabs were said to utter the following couplet if they should encounter one: \"Oh ass-footed one, just bray away, we won't leave the desert plain nor ever go astray.\"", "title": "Worship" }, { "paragraph_id": 16, "text": "Christian Julien Robin notes that all the known South Arabian divinities had a positive or protective role and that evil powers were only alluded to but were never personified.", "title": "Worship" }, { "paragraph_id": 17, "text": "Some scholars postulate that in pre-Islamic Arabia, including in Mecca, Allah was considered to be a deity, possibly a creator deity or a supreme deity in a polytheistic pantheon. The word Allah (from the Arabic al-ilah meaning \"the god\") may have been used as a title rather than a name. The concept of Allah may have been vague in the Meccan religion. According to Islamic sources, Meccans and their neighbors believed that the goddesses Al-lāt, Al-‘Uzzá, and Manāt were the daughters of Allah.", "title": "Worship" }, { "paragraph_id": 18, "text": "Regional variants of the word Allah occur in both pagan and Christian pre-Islamic inscriptions. References to Allah are found in the poetry of the pre-Islamic Arab poet Zuhayr bin Abi Sulma, who lived a generation before Muhammad, as well as pre-Islamic personal names. Muhammad's father's name was ʿAbd-Allāh, meaning \"the servant of Allah\".", "title": "Worship" }, { "paragraph_id": 19, "text": "Charles Russell Coulter and Patricia Turner considered that Allah's name may be derived from a pre-Islamic god called Ailiah and is similar to El, Il, Ilah, and Jehovah. They also considered some of his characteristics to be seemingly based on lunar deities like Almaqah, Kahl, Shaker, Wadd and Warakh. Alfred Guillaume states that the connection between Ilah that came to form Allah and ancient Babylonian Il or El of ancient Israel is not clear. Wellhausen states that Allah was known from Jewish and Christian sources and was known to pagan Arabs as the supreme god. 
Winfried Corduan doubts the theory of Allah of Islam being linked to a moon god, stating that the term Allah functions as a generic term, like the term El-Elyon used as a title for the god Sin.", "title": "Worship" }, { "paragraph_id": 20, "text": "South Arabian inscriptions from the fourth century AD refer to a god called Rahman (\"The Merciful One\") who had a monotheistic cult and was referred to as the \"Lord of heaven and Earth\". Aaron W. Hughes states that scholars are unsure whether he developed from the earlier polytheistic systems or developed due to the increasing significance of the Christian and Jewish communities, and that it is difficult to establish whether Allah was linked to Rahmanan. Maxime Rodinson, however, considers one of Allah's names, \"Ar-Rahman\", to have been used in the form of Rahmanan earlier.", "title": "Worship" }, { "paragraph_id": 21, "text": "Al-Lāt, Al-‘Uzzá and Manāt were common names used for multiple goddesses across Arabia. G. R. Hawting states that modern scholars have frequently associated the names of Arabian goddesses Al-lāt, Al-‘Uzzá and Manāt with cults devoted to celestial bodies, particularly Venus, drawing upon evidence external to the Muslim tradition as well as in relation to Syria, Mesopotamia and the Sinai Peninsula.", "title": "Worship" }, { "paragraph_id": 22, "text": "Allāt (Arabic: اللات) or al-Lāt was worshipped throughout the ancient Near East with various associations. Herodotus in the 5th century BC identifies Alilat (Greek: Ἀλιλάτ) as the Arabic name for Aphrodite (and, in another passage, for Urania), which is strong evidence for worship of Allāt in Arabia at that early date. Al-‘Uzzá (Arabic: العزى) was a fertility goddess or possibly a goddess of love. Manāt (Arabic: مناة) was the goddess of destiny.", "title": "Worship" }, { "paragraph_id": 23, "text": "Al-Lāt's cult was spread in Syria and northern Arabia. From Safaitic and Hismaic inscriptions, it is probable that she was worshiped as Lat (lt). F. 
V. Winnet saw al-Lat as a lunar deity due to the association of a crescent with her in 'Ayn esh-Shallāleh and a Lihyanite inscription mentioning the name of Wadd, the Minaean moon god, over the title of fkl lt. René Dussaud and Gonzague Ryckmans linked her with Venus while others have thought her to be a solar deity. John F. Healey considers that al-Uzza actually might have been an epithet of al-Lāt before becoming a separate deity in the Meccan pantheon. Paola Corrente, writing in Redefining Dionysus, considers she might have been a god of vegetation or a celestial deity of atmospheric phenomena and a sky deity.", "title": "Worship" }, { "paragraph_id": 24, "text": "The worship of sacred stones constituted one of the most important practices of the Semitic speaking peoples, including Arabs. Cult images of a deity were most often an unworked stone block. The most common name for these stone blocks was derived from the Semitic nsb (\"to be stood upright\"), but other names were used, such as Nabataean masgida (\"place of prostration\") and Arabic duwar (\"object of circumambulation\", this term often occurs in pre-Islamic Arabic poetry). These god-stones were usually a free-standing slab, but Nabataean god-stones are usually carved directly on the rock face. Facial features may be incised on the stone (especially in Nabataea), or astral symbols (especially in South Arabia). Under Greco-Roman influence, an anthropomorphic statue might be used instead.", "title": "Practices" }, { "paragraph_id": 25, "text": "The Book of Idols describes two types of statues: idols (sanam) and images (wathan). 
If a statue were made of wood, gold, or silver, after a human form, it would be an idol, but if the statue were made of stone, it would be an image.", "title": "Practices" }, { "paragraph_id": 26, "text": "Representation of deities in animal-form was common in South Arabia, such as the god Sayin from Hadhramaut, who was represented as either an eagle fighting a serpent or a bull.", "title": "Practices" }, { "paragraph_id": 27, "text": "Sacred places were known as hima, haram or mahram, and within these places, all living things were considered inviolable and violence was forbidden. In most of Arabia, these places would take the form of open-air sanctuaries, with distinguishing natural features such as springs and forests. Cities would contain temples, enclosing the sacred area with walls, and featuring ornate structures.", "title": "Practices" }, { "paragraph_id": 28, "text": "Sacred areas often had a guardian or a performer of cultic rites. These officials were thought to tend the area, receive offerings, and perform divination. They are known by many names, probably based on cultural-linguistic preference: afkal was used in the Hejaz, kâhin was used in the Sinai-Negev-Hisma region, and kumrâ was used in Aramaic-influenced areas. In South Arabia, rsw and 'fkl were used to refer to priests, and other words include qyn (\"administrator\") and mrtd (\"consecrated to a particular divinity\"). A more specialized staff is thought to have existed in major sanctuaries.", "title": "Practices" }, { "paragraph_id": 29, "text": "Pilgrimages to sacred places would be made at certain times of the year. 
Pilgrim fairs of central and northern Arabia took place in specific months designated as violence-free, allowing several activities to flourish, such as trade, though in some places only exchange was permitted.", "title": "Practices" }, { "paragraph_id": 30, "text": "The most important pilgrimage in Saba' was probably the pilgrimage of Almaqah at Ma'rib, performed in the month of dhu-Abhi (roughly in July). Two references attest the pilgrimage of Almaqah dhu-Hirran at 'Amran.√ The pilgrimage of Ta'lab Riyam took place in Mount Tur'at and the Zabyan temple at Hadaqan, while the pilgrimage of Dhu-Samawi, the god of the Amir tribe, took place in Yathill. Aside from Sabaean pilgrimages, the pilgrimage of Sayin took place at Shabwa.", "title": "Practices" }, { "paragraph_id": 31, "text": "The pilgrimage of Mecca involved the stations of Mount Arafat, Muzdalifah, Mina and central Mecca that included Safa and Marwa as well as the Kaaba. Pilgrims at the first two stations performed wuquf or standing in adoration. At Mina, animals were sacrificed. The procession from Arafat to Muzdalifah, and from Mina to Mecca, in a pre-reserved route towards idols or an idol, was termed ijaza and ifada, with the latter taking place before sunset. At Jabal Quzah, fires were started during the sacred month.", "title": "Practices" }, { "paragraph_id": 32, "text": "Nearby the Kaaba was located the betyl which was later called Maqam Ibrahim; a place called al-Ḥigr which Aziz al-Azmeh takes to be reserved for consecrated animals, basing his argument on a Sabaean inscription mentioning a place called mḥgr which was reserved for animals; and the Well of Zamzam. 
Both Safa and Marwa were adjacent to two sacrificial hills, one called Muṭ'im al Ṭayr and another Mujāwir al-Riḥ which was a pathway to Abu Kubais from where the Black Stone is reported to have originated.", "title": "Practices" }, { "paragraph_id": 33, "text": "Meccan pilgrimages differed according to the rites of different cult associations, in which individuals and groups joined for religious purposes. The Ḥilla association performed the hajj in autumn season while the Ṭuls and Ḥums performed the umrah in spring.", "title": "Practices" }, { "paragraph_id": 34, "text": "The Ḥums were the Quraysh, Banu Kinanah, Banu Khuza'a and Banu 'Amir. They did not perform the pilgrimage outside the zone of Mecca's haram, thus excluding Mount Arafat. They also developed certain dietary and cultural restrictions. According to Kitab al-Muhabbar, the Ḥilla denoted most of the Banu Tamim, Qays, Rabi`ah, Qūḍa'ah, Ansar, Khath'am, Bajīlah, Banu Bakr ibn Abd Manat, Hudhayl, Asad, Tayy and Bariq. The Ṭuls comprised the tribes of Yemen and Hadramaut, 'Akk, Ujayb and Īyād. The Basl recognised at least eight months of the calendar as holy. There was also another group which did not recognize the sanctity of Mecca's haram or holy months, unlike the other four.", "title": "Practices" }, { "paragraph_id": 35, "text": "The ancient Arabs that inhabited the Arabian Peninsula before the advent of Islam used to profess a widespread belief in fatalism (ḳadar) alongside a fearful consideration for the sky and the stars, which they held to be ultimately responsible for every phenomena that occurs on Earth and for the destiny of humankind. Accordingly, they shaped their entire lives in accordance with their interpretations of astral configurations and phenomena.", "title": "Practices" }, { "paragraph_id": 36, "text": "In South Arabia, oracles were regarded as ms’l, or \"a place of asking\", and that deities interacted by hr’yhw (\"making them see\") a vision, a dream, or even direct interaction. 
Otherwise deities interacted indirectly through a medium.", "title": "Practices" }, { "paragraph_id": 37, "text": "There were three methods of chance-based divination attested in pre-Islamic Arabia; two of these methods, making marks in the sand or on rocks and throwing pebbles are poorly attested. The other method, the practice of randomly selecting an arrow with instructions, was widely attested and was common throughout Arabia. A simple form of this practice was reportedly performed before the image of Dhu'l-Khalasa by a certain man, sometimes said to be the Kindite poet Imru al-Qays according to al-Kalbi. A more elaborate form of the ritual was performed in before the image of Hubal. This form of divination was also attested in Palmyra, evidenced by an honorific inscription in the temple of al-Lat.", "title": "Practices" }, { "paragraph_id": 38, "text": "The most common offerings were animals, crops, food, liquids, inscribed metal plaques or stone tablets, aromatics, edifices and manufactured objects. Camel-herding Arabs would devote some of their beasts to certain deities. The beasts would have their ears slit and would be left to pasture without a herdsman, allowing them to die a natural death.", "title": "Practices" }, { "paragraph_id": 39, "text": "Pre-Islamic Arabians, especially pastoralist tribes, sacrificed animals as an offering to a deity. This type of offering was common and involved domestic animals such as camels, sheep and cattle, while game animals and poultry were rarely or never mentioned. Sacrifice rites were not tied to a particular location though they were usually practiced in sacred places. Sacrifice rites could be performed by the devotee, though according to Hoyland, women were probably not allowed. The victim's blood, according to pre-Islamic Arabic poetry and certain South Arabian inscriptions, was also 'poured out' on the altar stone, thus forming a bond between the human and the deity. 
According to Muslim sources, most sacrifices were concluded with communal feasts.", "title": "Practices" }, { "paragraph_id": 40, "text": "In South Arabia, beginning with the Christian era, or perhaps a short while before, statuettes were presented before the deity, known as slm (male) or slmt (female). Human sacrifice was sometimes carried out in Arabia. The victims were generally prisoners of war, who represented the god's part of the victory in booty, although other forms might have existed.", "title": "Practices" }, { "paragraph_id": 41, "text": "Blood sacrifice was definitely practiced in South Arabia, but few allusions to the practice are known, apart from some Minaean inscriptions.", "title": "Practices" }, { "paragraph_id": 42, "text": "In the Hejaz, menstruating women were not allowed to be near the cult images. The area where Isaf and Na'ila's images stood was considered out-of-bounds for menstruating women. This was reportedly the same with Manaf. According to the Book of Idols, this rule applied to all the \"idols\". This was also the case in South Arabia, as attested in a South Arabian inscription from al-Jawf.", "title": "Practices" }, { "paragraph_id": 43, "text": "Sexual intercourse in temples was prohibited, as attested in two South Arabian inscriptions. One legend concerning Isaf and Na'ila, when two lovers made love in the Kaaba and were petrified, joining the idols in the Kaaba, echoes this prohibition.", "title": "Practices" }, { "paragraph_id": 44, "text": "The Dilmun civilization, which existed along the Persian Gulf coast and Bahrain until the 6th century BC, worshipped a pair of deities, Inzak and Meskilak. It is not known whether these were the only deities in the pantheon or whether there were others. 
The discovery of wells at the sites of a Dilmun temple and a shrine suggests that sweet water played an important part in religious practices.", "title": "By geography" }, { "paragraph_id": 45, "text": "In the subsequent Greco-Roman period, there is evidence that the worship of non-indigenous deities was brought to the region by merchants and visitors. These included Bel, a god popular in the Syrian city of Palmyra, the Mesopotamian deities Nabu and Shamash, the Greek deities Poseidon and Artemis and the west Arabian deities Kahl and Manat.", "title": "By geography" }, { "paragraph_id": 46, "text": "The main sources of religious information in pre-Islamic South Arabia are inscriptions, which number in the thousands, as well as the Quran, complemented by archaeological evidence.", "title": "By geography" }, { "paragraph_id": 47, "text": "The civilizations of South Arabia are considered to have the most developed pantheon in the Arabian peninsula. In South Arabia, the most common god was 'Athtar, who was considered remote. The patron deity (shym) was considered to be of much more immediate significance than 'Athtar. Thus, the kingdom of Saba' had Almaqah, the kingdom of Ma'in had Wadd, the kingdom of Qataban had 'Amm, and the kingdom of Hadhramaut had Sayin. Each people was termed the \"children\" of their respective patron deity. Patron deities played a vital role in sociopolitical terms, their cults serving as the focus of a person's cohesion and loyalty.", "title": "By geography" }, { "paragraph_id": 48, "text": "Evidence from surviving inscriptions suggests that each of the southern kingdoms had its own pantheon of three to five deities, the major deity always being a god. For example, the pantheon of Saba comprised Almaqah, the major deity, together with 'Athtar, Haubas, Dhat-Himyam, and Dhat-Badan. The main god in Ma'in and Himyar was 'Athtar, in Qataban it was Amm, and in Hadhramaut it was Sayin. 
'Amm was a lunar deity and was associated with the weather, especially lightning. One of the most frequent titles of the god Almaqah was \"Lord of Awwam\".", "title": "By geography" }, { "paragraph_id": 49, "text": "Anbay was an oracular god of Qataban and also the spokesman of Amm. His name was invoked in royal regulations regarding water supply. Anbay's name was related to that of the Babylonian deity Nabu. Hawkam was invoked alongside Anbay as god of \"command and decision\" and his name is derived from the root word \"to be wise\".", "title": "By geography" }, { "paragraph_id": 50, "text": "Each kingdom's central temple was the focus of worship for the main god and would be the destination for an annual pilgrimage, with regional temples dedicated to a local manifestation of the main god. Other beings worshipped included local deities or deities dedicated to specific functions as well as deified ancestors.", "title": "By geography" }, { "paragraph_id": 51, "text": "The encroachment of northern Arab tribes into South Arabia also introduced northern Arab deities into the region. The three goddesses al-Lat, al-Uzza and Manat became known as Lat/Latan, Uzzayan and Manawt. Uzzayan's cult in particular was widespread in South Arabia, and in Qataban she was invoked as a guardian of the final royal palace. Lat/Latan was not significant in South Arabia, but appears to be popular with the Arab tribes bordering Yemen. Other Arab deities include Dhu-Samawi, a god originally worshipped by the Amir tribe, and Kahilan, perhaps related to Kahl of Qaryat al-Faw.", "title": "By geography" }, { "paragraph_id": 52, "text": "Bordering Yemen, the Azd Sârat tribe of the Asir region was said to have worshipped Dhu'l-Shara, Dhu'l-Kaffayn, Dhu'l-Khalasa and A'im. According to the Book of Idols, Dhu'l-Kaffayn originated from a clan of the Banu Daws. In addition to being worshipped among the Azd, Dushara is also reported to have a shrine amongst the Daws. 
Dhu’l-Khalasa was an oracular god and was also worshipped by the Bajila and Khatham tribes.", "title": "By geography" }, { "paragraph_id": 53, "text": "Before conversion to Christianity, the Aksumites followed a polytheistic religion that was similar to that of Southern Arabia. The lunar god Hawbas was worshiped in South Arabia and Aksum. The name of the god Astar, a sky-deity was related to that of 'Attar. The god Almaqah was worshiped at Hawulti-Melazo. The South Arabian gods in Aksum included Dhat-Himyam and Dhat-Ba'adan. A stone later reused for the church of Enda-Cerqos at Melazo mentions these gods. Hawbas is also mentioned on an altar and sphinx in Dibdib. The name of Nrw who is mentioned in Aksum inscriptions is related to that of the South Arabian god Nawraw, a deity of stars.", "title": "By geography" }, { "paragraph_id": 54, "text": "The Himyarite kings radically opposed polytheism in favor of Judaism, beginning officially in 380. The last trace of polytheism in South Arabia, an inscription commemorating a construction project with a polytheistic invocation, and another, mentioning the temple of Ta’lab, all date from just after 380 (the former dating to the rule of the king Dhara’amar Ayman, and the latter dating to the year 401–402). The rejection of polytheism from the public sphere did not mean the extinction of it altogether, as polytheism likely continued in the private sphere.", "title": "By geography" }, { "paragraph_id": 55, "text": "The Kinda tribe's chief god was Kahl, whom their capital Qaryat Dhat Kahl (modern Qaryat al-Faw) was named for. His name appears in the form of many inscriptions and rock engravings on the slopes of the Tuwayq, on the walls of the souk of the village, in the residential houses and on the incense burners. 
An inscription in Qaryat Dhat Kahl invokes the gods Kahl, Athtar al-Shariq and Lah.", "title": "By geography" }, { "paragraph_id": 56, "text": "According to Islamic sources, the Hejaz region was home to three important shrines dedicated to al-Lat, al-’Uzza and Manat. The shrine and idol of al-Lat, according to the Book of Idols, once stood in Ta'if, and was primarily worshipped by the Banu Thaqif tribe. Al-’Uzza's principal shrine was in Nakhla and she was the chief-goddess of the Quraysh tribe. Manāt's idol, reportedly the oldest of the three, was erected on the seashore between Medina and Mecca, and was honored by the Aws and Khazraj tribes. Inhabitants of several areas venerated Manāt, performing sacrifices before her idol, and pilgrimages of some were not considered completed until they visited Manāt and shaved their heads.", "title": "By geography" }, { "paragraph_id": 57, "text": "In the Muzdalifah region near Mecca, the god Quzah, who is a god of rains and storms, was worshipped. In pre-Islamic times pilgrims used to halt at the \"hill of Quzah\" before sunrise. Qusai ibn Kilab is traditionally reported to have introduced the association of fire worship with him on Muzdalifah.", "title": "By geography" }, { "paragraph_id": 58, "text": "Various other deities were venerated in the area by specific tribes, such as the god Suwa' by the Banu Hudhayl tribe and the god Nuhm by the Muzaynah tribe.", "title": "By geography" }, { "paragraph_id": 59, "text": "The majority of extant information about Mecca during the rise of Islam and earlier times comes from the text of the Quran itself and later Muslim sources such as the prophetic biography literature dealing with the life of Muhammad and the Book of Idols. Alternative sources are so fragmentary and specialized that writing a convincing history of this period based on them alone is impossible. 
Several scholars hold that the sīra literature is not independent of the Quran but has been fabricated to explain the verses of the Quran. There is evidence to support the contention that some reports of the sīras are of dubious validity, but there is also evidence to support the contention that the sīra narratives originated independently of the Quran. Compounding the problem is that the earliest extant Muslim historical works, including the sīras, were composed in their definitive form more than a century after the beginning of the Islamic era. Some of these works were based on subsequently lost earlier texts, which in their turn recorded a fluid oral tradition. Scholars do not agree as to the time when such oral accounts began to be systematically collected and written down, and they differ greatly in their assessment of the historical reliability of the available texts.", "title": "By geography" }, { "paragraph_id": 60, "text": "The Kaaba, whose environs were regarded as sacred (haram), became a national shrine under the custodianship of the Quraysh, the chief tribe of Mecca, which made the Hejaz the most important religious area in north Arabia. Its role was solidified by a confrontation with the Christian king Abraha, who controlled much of Arabia from a seat of power in Yemen in the middle of the sixth century. Abraha had recently constructed a splendid church in Sana'a, and he wanted to make that city a major center of pilgrimage, but Mecca's Kaaba presented a challenge to his plan. Abraha found a pretext for an attack on Mecca, presented by different sources alternatively as pollution of the church by a tribe allied to the Meccans or as an attack on Abraha's grandson in Najran by a Meccan party. The defeat of the army he assembled to conquer Mecca is recounted with miraculous details by the Islamic tradition and is also alluded to in the Quran and pre-Islamic poetry. 
After the battle, which probably occurred around 565, the Quraysh became a dominant force in western Arabia, receiving the title \"God's people\" (ahl Allah) according to Islamic sources, and formed the cult association of ḥums, which tied members of many tribes in western Arabia to the Kaaba.", "title": "By geography" }, { "paragraph_id": 61, "text": "According to tradition, the Kaaba was a cube-like, originally roofless structure housing a black stone revered as a relic. The sanctuary was dedicated to Hubal (Arabic: هبل), who, according to some sources, was worshiped as the greatest of the 360 idols the Kaaba contained, which probably represented the days of the year. Ibn Ishaq and Ibn Al-Kalbi both report that the human-shaped idol of Hubal made of precious stone (agate, according to the Book of Idols) came into the possession of the Quraysh with its right hand broken off and that the Quraysh made a hand of gold to replace it. A soothsayer performed divination in the shrine by drawing ritual arrows, and vows and sacrifices were made to assure success. Marshall Hodgson argues that relations with deities and fetishes in pre-Islamic Mecca were maintained chiefly on the basis of bargaining, where favors were expected in return for offerings. A deity's or oracle's failure to provide the desired response was sometimes met with anger.", "title": "By geography" }, { "paragraph_id": 62, "text": "Different theories have been proposed regarding the role of Allah in Meccan religion. According to one hypothesis, which goes back to Julius Wellhausen, Allah (the supreme deity of the tribal federation around Quraysh) was a designation that consecrated the superiority of Hubal (the supreme deity of Quraysh) over the other gods. However, there is also evidence that Allah and Hubal were two distinct deities. 
According to that hypothesis, the Kaaba was first consecrated to a supreme deity named Allah and then hosted the pantheon of Quraysh after their conquest of Mecca, about a century before the time of Muhammad. Some inscriptions seem to indicate the use of Allah as a name of a polytheist deity centuries earlier, but we know nothing precise about this use. Some scholars have suggested that Allah may have represented a remote creator god who was gradually eclipsed by more particularized local deities. There is disagreement on whether Allah played a major role in the Meccan religious cult. No iconic representation or idol of Allah is known to have existed.", "title": "By geography" }, { "paragraph_id": 63, "text": "The three chief goddesses of Meccan religion were al-Lat, Al-‘Uzzá, and Manāt, who were called the daughters of Allah. Egerton Sykes meanwhile states that Al-lāt was the female counterpart of Allah while Uzza was a name given by Banu Ghatafan to the planet Venus.", "title": "By geography" }, { "paragraph_id": 64, "text": "Other deities of the Quraysh in Mecca included Manaf, Isaf and Na’ila. Although the early Arab historian Al-Tabari calls Manaf (Arabic: مناف) \"one of the greatest deities of Mecca\", very little information is available about it. Women touched his idol as a token of blessing, and kept away from it during menstruation. Gonzague Ryckmans described this as a practice peculiar to Manaf, but according to the Encyclopedia of Islam, a report from Ibn Al-Kalbi indicates that it was common to all idols. Muhammad's great-great-grandfather's name was Abd Manaf which means \"slave of Manaf\". He is thought by some scholars to be a sun-god. The idols of Isāf and Nā'ila were located near the Black Stone with a talbiyah performed to Isāf during sacrifices. 
Various legends existed about the idols, including one that they were petrified after they committed adultery in the Kaaba.", "title": "By geography" }, { "paragraph_id": 65, "text": "The pantheon of the Quraysh was not identical with that of the tribes who entered into various cult and commercial associations with them, especially that of the hums. Christian Julien Robin argues that the former was composed principally of idols that were in the sanctuary of Mecca, including Hubal and Manaf, while the pantheon of the associations was superimposed on it, and its principal deities included the three goddesses, who had neither idols nor a shrine in that city.", "title": "By geography" }, { "paragraph_id": 66, "text": "The second half of the sixth century was a period of political disorder in Arabia and communication routes were no longer secure. Religious divisions were an important cause of the crisis. Judaism became the dominant religion in Yemen while Christianity took root in the Persian Gulf area. In line with the broader trends of the ancient world, Arabia yearned for a more spiritual form of religion and began believing in afterlife, while the choice of religion increasingly became a personal rather than communal choice. While many were reluctant to convert to a foreign faith, those faiths provided intellectual and spiritual reference points, and the old pagan vocabulary of Arabic began to be replaced by Jewish and Christian loanwords from Aramaic everywhere, including Mecca. The distribution of pagan temples supports Gerald Hawting's argument that Arabian polytheism was marginalized in the region and already dying in Mecca on the eve of Islam. 
The practice of polytheistic cults was increasingly limited to the steppe and the desert, and in Yathrib (later known as Medina), which included two tribes with polytheistic majorities, the absence of a public pagan temple in the town or its immediate neighborhood indicates that polytheism was confined to the private sphere. Looking at the text of the Quran itself, Hawting has also argued that the criticism of idolaters and polytheists contained in the Quran is in fact a hyperbolic reference to other monotheists, in particular the Arab Jews and Arab Christians, whose religious beliefs were considered imperfect. According to some traditions, the Kaaba contained no statues, but its interior was decorated with images of Mary and Jesus, prophets, angels, and trees.", "title": "By geography" }, { "paragraph_id": 67, "text": "To counter the effects of anarchy, the institution of sacred months, during which every act of violence was prohibited, was reestablished. During those months, it was possible to participate in pilgrimages and fairs without danger. The Quraysh upheld the principle of two annual truces, one of one month and the second of three months, which conferred a sacred character on the Meccan sanctuary. The cult association of hums, in which individuals and groups partook in the same rites, was primarily religious, but it also had important economic consequences. Although, as Patricia Crone has shown, Mecca could not compare with the great centers of caravan trade on the eve of Islam, it was probably one of the most prosperous and secure cities of the peninsula, since, unlike many of them, it did not have surrounding walls. Pilgrimage to Mecca was a popular custom. Some Islamic rituals, including processions around the Kaaba and between the hills of al-Safa and Marwa, as well as the salutation \"we are here, O Allah, we are here\" repeated on approaching the Kaaba are believed to have antedated Islam.
Spring water acquired a sacred character in Arabia early on and Islamic sources state that the well of Zamzam became holy long before the Islamic era.", "title": "By geography" }, { "paragraph_id": 68, "text": "According to Ibn Sa'd, the opposition in Mecca started when the prophet of Islam, Muhammad, delivered verses that \"spoke shamefully of the idols they (the Meccans) worshiped other than Himself (God) and mentioned the perdition of their fathers who died in disbelief\". According to William Montgomery Watt, as the ranks of Muhammad's followers swelled, he became a threat to the local tribes and the rulers of the city, whose wealth rested upon the Kaaba, the focal point of Meccan religious life, which Muhammad threatened to overthrow. Muhammad's denunciation of the Meccan traditional religion was especially offensive to his own tribe, the Quraysh, as they were the guardians of the Kaaba.", "title": "By geography" }, { "paragraph_id": 69, "text": "The conquest of Mecca around 629–630 AD led to the destruction of the idols around the Kaaba, including Hubal. Following the conquest, shrines and temples dedicated to deities were destroyed, such as the shrines to al-Lat, al-’Uzza and Manat in Ta’if, Nakhla and al-Qudayd respectively.", "title": "By geography" }, { "paragraph_id": 70, "text": "Less complex societies outside South Arabia often had smaller pantheons, with the patron deity having much prominence. The deities attested in north Arabian inscriptions include Ruda, Nuha, Allah, Dathan, and Kahl. Inscriptions in a North Arabian dialect in the region of Najd referring to Nuha describe emotions as a gift from him. In addition, they also refer to Ruda being responsible for all things good and bad.", "title": "By geography" }, { "paragraph_id": 71, "text": "The Safaitic tribes in particular prominently worshipped the goddess al-Lat as a bringer of prosperity. 
The Syrian god Baalshamin was also worshipped by Safaitic tribes and is mentioned in Safaitic inscriptions.", "title": "By geography" }, { "paragraph_id": 72, "text": "Religious worship amongst the Qedarites, an ancient tribal confederation that was probably subsumed into Nabataea around the 2nd century AD, was centered around a polytheistic system in which women rose to prominence. Divine images of the gods and goddesses worshipped by Qedarite Arabs, as noted in Assyrian inscriptions, included representations of Atarsamain, Nuha, Ruda, Dai, Abirillu and Atarquruma. The female guardian of these idols, usually the reigning queen, served as a priestess (apkallatu, in Assyrian texts) who communed with the other world. There is also evidence that the Qedar worshipped al-Lat to whom the inscription on a silver bowl from a king of Qedar is dedicated. In the Babylonian Talmud, which was passed down orally for centuries before being transcribed c. 500 AD, in tractate Taanis (folio 5b), it is said that most Qedarites worshiped pagan gods.", "title": "By geography" }, { "paragraph_id": 73, "text": "The Aramaic stele inscription discovered by Charles Hubert in 1880 at Tayma mentions the introduction of a new god called Salm of hgm into the city's pantheon being permitted by three local gods – Salm of Mahram who was the chief god, Shingala, and Ashira. The name Salm means \"image\" or \"idol\".", "title": "By geography" }, { "paragraph_id": 74, "text": "The Midianites, a people referred to in the Book of Genesis and located in north-western Arabia, may have worshipped Yahweh. Indeed, some scholars believe that Yahweh was originally a Midianite god and that he was subsequently adopted by the Israelites. An Egyptian temple of Hathor continued to be used during the Midianite occupation of the site, although images of Hathor were defaced suggesting Midianite opposition. 
They transformed it into a desert tent-shrine set up with a copper sculpture of a snake.", "title": "By geography" }, { "paragraph_id": 75, "text": "The Lihyanites worshipped the god Dhu-Ghabat and rarely turned to others for their needs. Dhu-Ghabat's name means \"he of the thicket\", based on the etymology of gabah, meaning forest or thicket. The god al-Kutba', a god of writing probably related to a Babylonian deity and perhaps brought into the region by the Babylonian king Nabonidus, is mentioned in Lihyanite inscriptions as well. The worship of the Hermonian gods Leucothea and Theandrios was spread from Phoenicia to Arabia.", "title": "By geography" }, { "paragraph_id": 76, "text": "According to the Book of Idols, the Tayy tribe worshipped al-Fals, whose idol stood on Jabal Aja, while the Kalb tribe worshipped Wadd, who had an idol in Dumat al-Jandal.", "title": "By geography" }, { "paragraph_id": 77, "text": "The Nabataeans worshipped primarily northern Arabian deities. Under foreign influences, they also incorporated foreign deities and elements into their beliefs.", "title": "By geography" }, { "paragraph_id": 78, "text": "The Nabataeans' chief god was Dushara. In Petra, the only major goddess was Al-‘Uzzá, assuming the traits of Isis, Tyche and Aphrodite. It is unknown whether her worship and identity are related to her cult at Nakhla and others. The Nabatean inscriptions define Allāt and Al-Uzza as the \"bride of Dushara\". Al-Uzza may have been an epithet of Allāt in the Nabataean religion according to John F. Healey.", "title": "By geography" }, { "paragraph_id": 79, "text": "Outside Petra, other deities were worshipped; for example, Hubal and Manat were invoked in the Hejaz, and al-Lat was invoked in the Hauran and the Syrian desert. The Nabataean king Obodas I, who founded Obodat, was deified and worshipped as a god. They also worshipped Shay al-Qawm, al-Kutba', and various Greco-Roman deities such as Nike and Tyche.
Maxime Rodinson suggests that Hubal, who was popular in Mecca, had a Nabataean origin.", "title": "By geography" }, { "paragraph_id": 80, "text": "The worship of Pakidas, a Nabataean god, is attested at Gerasa alongside Hera in an inscription dated to the first century A.D. while an Arabian god is also attested by three inscriptions dated to the second century.", "title": "By geography" }, { "paragraph_id": 81, "text": "The Nabataeans were known for their elaborate tombs, but they were not just for show; they were meant to be comfortable places for the dead. Petra has many \"sacred high places\" which include altars that have usually been interpreted as places of human sacrifice, although, since the 1960s, an alternative theory that they are \"exposure platforms\" for placing the corpses of the deceased as part of a funerary ritual has been put forward. However, there is, in fact, little evidence for either proposition.", "title": "By geography" }, { "paragraph_id": 82, "text": "Palmyra was a cosmopolitan society, with its population being a mix of Aramaeans and Arabs. The Arabs of Palmyra worshipped al-Lat, Rahim and Shamash. The temple of al-Lat was established by the Bene Ma'zin tribe, who were probably an Arab tribe. The nomads of the countryside worshipped a set of deities, bearing Arab names and attributes, most prominent of them was Abgal, who himself is not attested in Palmyra itself. Ma'n, an Arab god, was worshipped alongside Abgal in a temple dedicated in 195 AD at Khirbet Semrin in the Palmyrene region while an inscription dated 194 AD at Ras esh-Shaar calls him the \"good and bountiful god\". A stele at Ras esh-Shaar shows him riding a horse with a lance while the god Saad is riding a camel. Abgal, Ma'n and Sa'd were known as the genii.", "title": "By geography" }, { "paragraph_id": 83, "text": "The god Ashar was represented on a stele in Dura-Europos alongside another god Sa'd. 
The former was represented on a horse with Arab dress while the other was shown standing on the ground. Both had Parthian hairstyles, large facial hair and moustaches, as well as similar clothing. Ashar's name is found to have been used in a theophoric manner among the Arab-majority areas of the region of the Northwest Semitic languages, like Hatra, where names like \"Refuge of Ashar\", \"Servant of Ashar\" and \"Ashar has given\" are recorded on an inscription.", "title": "By geography" }, { "paragraph_id": 84, "text": "In Edessa, the solar deity was the primary god around the time of the Roman Emperor Julian and this worship was presumably brought in by migrants from Arabia. Julian's oration delivered to the denizens of the city mentioned that they worshipped the Sun surrounded by Azizos and Monimos whom Iamblichus identified with Ares and Hermes respectively. Monimos derived from Mu'nim or \"the favourable one\", and was another name of Ruda or Ruldaiu as apparent from spellings of his name in Sennacherib's Annals.", "title": "By geography" }, { "paragraph_id": 85, "text": "The idol of the god al-Uqaysir was, according to the Book of Idols, located in Syria, and was worshipped by the tribes of Quda'a, Lakhm, Judham, Amela, and Ghatafan. Adherents would go on a pilgrimage to the idol and shave their heads, then mix their hair with wheat, \"for every single hair a handful of wheat\".", "title": "By geography" }, { "paragraph_id": 86, "text": "A shrine to Dushara has been discovered in the harbour of ancient Puteoli in Italy. The city was an important nexus for trade to the Near East, and it is known to have had a Nabataean presence during the mid 1st century BCE.
A Minaean altar dedicated to Wadd evidently existed in Delos, containing two inscriptions in Minaean and Greek respectively.", "title": "By geography" }, { "paragraph_id": 87, "text": "The Bedouin were introduced to Meccan ritualistic practices as they frequented settled towns of the Hejaz during the four months of the \"holy truce\", the first three of which were devoted to religious observance, while the fourth was set aside for trade. Alan Jones infers from Bedouin poetry that the gods, even Allah, were less important to the Bedouins than Fate. They seem to have had little trust in rituals and pilgrimages as means of propitiating Fate, but had recourse to divination and soothsayers (kahins). The Bedouins regarded some trees, wells, caves and stones as sacred objects, either as fetishes or as means of reaching a deity. They created sanctuaries where people could worship fetishes.", "title": "By geography" }, { "paragraph_id": 88, "text": "The Bedouins had a code of honor which Fazlur Rahman Malik states may be regarded as their religious ethics. This code encompassed women, bravery, hospitality, honouring one's promises and pacts, and vengeance. They believed that the ghost of a slain person would cry out from the grave until their thirst for blood was quenched. Practices such as killing of infant girls were often regarded as having religious sanction. Numerous mentions of jinn in the Quran and testimony of both pre-Islamic and Islamic literature indicate that the belief in spirits was prominent in pre-Islamic Bedouin religion. However, there is evidence that the word jinn is derived from Aramaic, ginnaye, which was widely attested in Palmyrene inscriptions. The Aramaic word was used by Christians to designate pagan gods reduced to the status of demons, and was introduced into Arabic folklore only late in the pre-Islamic era. Julius Wellhausen has observed that such spirits were thought to inhabit desolate, dingy and dark places and that they were feared. 
One had to protect oneself from them, but they were not the objects of a true cult.", "title": "By geography" }, { "paragraph_id": 89, "text": "Bedouin religious experience also included an apparently indigenous cult of ancestors. The dead were not regarded as powerful, but rather as deprived of protection and needing the charity of the living as a continuation of social obligations beyond the grave. Only certain ancestors, especially heroes from whom the tribe was said to derive its name, seem to have been objects of real veneration.", "title": "By geography" }, { "paragraph_id": 90, "text": "Iranian religions existed in pre-Islamic Arabia on account of Sasanian military presence along the Persian Gulf and South Arabia and on account of trade routes between the Hejaz and Iraq. Some Arabs in the northeast of the peninsula converted to Zoroastrianism and several Zoroastrian temples were constructed in Najd. Some of the members from the tribe of Banu Tamim had converted to the religion. There is also evidence of the existence of Manichaeism in Arabia, as several early sources indicate a presence of \"zandaqas\" in Mecca, although the term could also be interpreted as referring to Mazdakism. However, according to the most recent research by Tardieu, the prevalence of Manichaeism in Mecca during the 6th and 7th centuries, when Islam emerged, cannot be proven. Similar reservations regarding the appearance of Manichaeism and Mazdakism in pre-Islamic Mecca are offered by Trompf & Mikkelsen et al. in their latest work (2018). There is evidence for the circulation of Iranian religious ideas in the form of Persian loanwords in the Quran, such as firdaws (paradise).", "title": "Other religions" }, { "paragraph_id": 91, "text": "Zoroastrianism was also present in Eastern Arabia and Persian-speaking Zoroastrians lived in the region. The religion was introduced into the region, including modern-day Bahrain, during the rule of Persian empires starting from 250 B.C.
It was mainly practiced in Bahrain by Persian settlers. Zoroastrianism was also practiced in the Persian-ruled area of modern-day Oman. The religion also existed in the Persian-ruled area of modern Yemen. The descendants of Abna, the Persian conquerors of Yemen, were followers of Zoroastrianism. Yemen's Zoroastrians who had the jizya imposed on them after being conquered by Muhammad are mentioned by the Islamic historian al-Baladhuri. According to Serjeant, the Baharna people may be the Arabized descendants of converts from the original population of ancient Persians (majus) as well as other religions.", "title": "Other religions" }, { "paragraph_id": 92, "text": "A thriving community of Jewish tribes existed in pre-Islamic Arabia and included both sedentary and nomadic communities. Jews had migrated into Arabia from Roman times onwards. Arabian Jews spoke Arabic as well as Hebrew and Aramaic and had contact with Jewish religious centers in Babylonia and Palestine. The Yemeni Himyarites converted to Judaism in the 4th century, and some of the Kinda were also converted in the 4th/5th century. Jewish tribes existed in all major Arabian towns during Muhammad's time, including in Tayma and Khaybar as well as Medina, with twenty tribes living in the peninsula. From tomb inscriptions, it is evident that Jews also lived in Mada'in Saleh and Al-'Ula.", "title": "Other religions" }, { "paragraph_id": 93, "text": "There is evidence that Jewish converts in the Hejaz were regarded as Jews by other Jews, as well as by non-Jews, and sought advice from Babylonian rabbis on matters of attire and kosher food. In at least one case, it is known that an Arab tribe agreed to adopt Judaism as a condition for settling in a town dominated by Jewish inhabitants. Some Arab women in Yathrib/Medina are said to have vowed to make their child a Jew if the child survived, since they considered the Jews to be people \"of knowledge and the book\" (ʿilmin wa-kitābin).
Philip Hitti infers from proper names and agricultural vocabulary that the Jewish tribes of Yathrib consisted mostly of Judaized clans of Arabian and Aramaean origin.", "title": "Other religions" }, { "paragraph_id": 94, "text": "The key role played by Jews in the trade and markets of the Hejaz meant that market day for the week was the day preceding the Jewish Sabbath. This day, which was called aruba in Arabic, also provided occasion for legal proceedings and entertainment, which in turn may have influenced the choice of Friday as the day of Muslim congregational prayer. Toward the end of the sixth century, the Jewish communities in the Hejaz were in a state of economic and political decline, but they continued to flourish culturally in and beyond the region. They had developed their distinctive beliefs and practices, with a pronounced mystical and eschatological dimension. In the Islamic tradition, based on a phrase in the Quran, Arab Jews are said to have referred to Uzair as the son of Allah, although the historical accuracy of this assertion has been disputed.", "title": "Other religions" }, { "paragraph_id": 95, "text": "Jewish agriculturalists lived in the region of Eastern Arabia. According to Robert Bertram Serjeant, the Baharna may be the Arabized \"descendants of converts from Christians (Arameans), Jews and ancient Persians (Majus) inhabiting the island and cultivated coastal provinces of Eastern Arabia at the time of the Arab conquest\". From the Islamic sources, it seems that Judaism was the religion most followed in Yemen. Ya'qubi claimed all Yemenites to be Jews; Ibn Hazm however states only Himyarites and some Kindites were Jews.", "title": "Other religions" }, { "paragraph_id": 96, "text": "The main areas of Christian influence in Arabia were on the northeastern and northwestern borders and in what was to become Yemen in the south. 
The north west was under the influence of Christian missionary activity from the Roman Empire where the Ghassanids, a client kingdom of the Romans, were converted to Christianity. In the south, particularly at Najran, a centre of Christianity developed as a result of the influence of the Christian Kingdom of Axum based on the other side of the Red Sea in Ethiopia. Some of the Banu Harith had converted to Christianity. One family of the tribe built a large church at Najran called Deir Najran, also known as the \"Ka'ba of Najran\". Both the Ghassanids and the Christians in the south adopted Monophysitism.", "title": "Other religions" }, { "paragraph_id": 97, "text": "The third area of Christian influence was on the north eastern borders where the Lakhmids, a client tribe of the Sassanians, adopted Nestorianism, being the form of Christianity having the most influence in the Sassanian Empire. As the Persian Gulf region of Arabia increasingly fell under the influence of the Sassanians from the early third century, many of the inhabitants were exposed to Christianity following the eastward dispersal of the religion by Mesopotamian Christians. However, it was not until the fourth century that Christianity gained popularity in the region with the establishment of monasteries and a diocesan structure.", "title": "Other religions" }, { "paragraph_id": 98, "text": "In pre-Islamic times, the population of Eastern Arabia consisted of Christianized Arabs (including Abd al-Qays) and Aramean Christians among other religions. Syriac functioned as a liturgical language. Serjeant states that the Baharna may be the Arabized descendants of converts from the original population of Christians (Aramaeans), among other religions at the time of Arab conquests. Beth Qatraye, which translates \"region of the Qataris\" in Syriac, was the Christian name used for the region encompassing north-eastern Arabia. It included Bahrain, Tarout Island, Al-Khatt, Al-Hasa, and Qatar. 
Oman and what is today the United Arab Emirates comprised the diocese known as Beth Mazunaye. The name was derived from 'Mazun', the Persian name for Oman and the United Arab Emirates. Sohar was the central city of the diocese.", "title": "Other religions" }, { "paragraph_id": 99, "text": "In Nejd, in the centre of the peninsula, there is evidence of members of two tribes, Kinda and Taghlib, converting to Christianity in the 6th century. However, in the Hejaz in the west, whilst there is evidence of the presence of Christianity, it is not thought to have been significant amongst the indigenous population of the area.", "title": "Other religions" }, { "paragraph_id": 100, "text": "Arabicized Christian names were fairly common among pre-Islamic Arabians, which has been attributed to the influence that Syrianized Christian Arabs had on Bedouins of the peninsula for several centuries before the rise of Islam.", "title": "Other religions" }, { "paragraph_id": 101, "text": "Neal Robinson, based on verses in the Quran, believes that some Arab Christians may have held unorthodox beliefs such as the worshipping of a divine triad of God the father, Jesus the Son and Mary the Mother. Furthermore, there is evidence that unorthodox groups such as the Collyridians, whose adherents worshipped Mary, were present in Arabia, and it has been proposed that the Quran refers to their beliefs. However, other scholars, notably Mircea Eliade, William Montgomery Watt, G. R. Hawting and Sidney H. Griffith, cast doubt on the historicity or reliability of such references in the Quran. Their views are as follows:", "title": "Other religions" } ]
Religion in pre-Islamic Arabia included indigenous Arabian polytheism, ancient Semitic religions, Christianity, Judaism, Mandaeism, and Zoroastrianism. Arabian polytheism, the dominant form of religion in pre-Islamic Arabia, was based on veneration of deities and spirits. Worship was directed to various gods and goddesses, including Hubal and the goddesses al-Lāt, al-‘Uzzā, and Manāt, at local shrines and temples such as the Kaaba in Mecca. Deities were venerated and invoked through a variety of rituals, including pilgrimages and divination, as well as ritual sacrifice. Different theories have been proposed regarding the role of Allah in Meccan religion. Many of the physical descriptions of the pre-Islamic gods are traced to idols, especially near the Kaaba, which is said to have contained up to 360 of them. Other religions were represented to varying, lesser degrees. The influence of the adjacent Roman and Aksumite civilizations resulted in Christian communities in the northwest, northeast, and south of Arabia. Christianity made a lesser impact in the remainder of the peninsula, but did secure some conversions. With the exception of Nestorianism in the northeast and the Persian Gulf, the dominant form of Christianity was Miaphysitism. The peninsula had been a destination for Jewish migration since Roman times, which had resulted in a diaspora community supplemented by local converts. Additionally, the influence of the Sasanian Empire resulted in Iranian religions being present in the peninsula. Zoroastrianism existed in the east and south, while there is evidence of either Manichaeism or Mazdakism being possibly practiced in Mecca.
2001-12-15T02:27:16Z
2023-12-19T18:31:44Z
[ "Template:Reflist", "Template:Harvnb", "Template:Short description", "Template:Good article", "Template:Islam and other religions", "Template:Sfn", "Template:Transl", "Template:Pre-Islamic Arabian deities", "Template:Cite journal", "Template:Refbegin", "Template:Religion topics", "Template:Fertile Crescent myth", "Template:Arab culture", "Template:Paganism", "Template:About", "Template:Main", "Template:Further", "Template:Citation", "Template:Refend", "Template:Anchor", "Template:Cite encyclopedia", "Template:Cite book", "Template:Cite web", "Template:Pre-Islamic Arabia" ]
https://en.wikipedia.org/wiki/Religion_in_pre-Islamic_Arabia
15,392
Imperial Conference
Imperial Conferences (Colonial Conferences before 1907) were periodic gatherings of government leaders from the self-governing colonies and dominions of the British Empire between 1887 and 1937, before the establishment of regular Meetings of Commonwealth Prime Ministers in 1944. They were held in 1887, 1894, 1897, 1902, 1907, 1911, 1921, 1923, 1926, 1930, 1932 and 1937. All the conferences were held in London, the seat of the Empire, except for the 1894 and 1932 conferences which were held in Ottawa, the capital of the senior Dominion of the Crown. The 1907 conference changed the name of the meetings to Imperial Conferences and agreed that the meetings should henceforth be regular rather than taking place while overseas statesmen were visiting London for royal occasions (e.g. jubilees and coronations). Originally instituted to emphasise imperial unity, as time went on, the conferences became a key forum for dominion governments to assert their desire to remove the remaining vestiges of their colonial status. The conference of 1926 agreed to the Balfour Declaration, which acknowledged that the dominions would henceforth rank as equals to the United Kingdom, as members of the 'British Commonwealth of Nations'. The conference of 1930 decided to abolish the legislative supremacy of the British Parliament as it was expressed through the Colonial Laws Validity Act and other Imperial Acts. The statesmen recommended that a declaratory enactment of Parliament, which became the Statute of Westminster 1931, be passed with the consent of the dominions, but some dominions did not ratify the statute until some years afterwards. The 1930 conference was notable, too, for the attendance of Southern Rhodesia, despite it being a self-governing colony, not a dominion. As World War II drew to a close, Imperial Conferences were replaced by Commonwealth Prime Ministers' Conferences, with 17 such meetings occurring from 1944 until 1969; all but one of the meetings occurred in London. 
The gatherings were renamed Commonwealth Heads of Government Meetings (CHOGM) in 1971 and were henceforth held every two years with hosting duties rotating around the Commonwealth.
[ { "paragraph_id": 0, "text": "Imperial Conferences (Colonial Conferences before 1907) were periodic gatherings of government leaders from the self-governing colonies and dominions of the British Empire between 1887 and 1937, before the establishment of regular Meetings of Commonwealth Prime Ministers in 1944. They were held in 1887, 1894, 1897, 1902, 1907, 1911, 1921, 1923, 1926, 1930, 1932 and 1937.", "title": "" }, { "paragraph_id": 1, "text": "All the conferences were held in London, the seat of the Empire, except for the 1894 and 1932 conferences which were held in Ottawa, the capital of the senior Dominion of the Crown. The 1907 conference changed the name of the meetings to Imperial Conferences and agreed that the meetings should henceforth be regular rather than taking place while overseas statesmen were visiting London for royal occasions (e.g. jubilees and coronations).", "title": "" }, { "paragraph_id": 2, "text": "Originally instituted to emphasise imperial unity, as time went on, the conferences became a key forum for dominion governments to assert their desire to remove the remaining vestiges of their colonial status. The conference of 1926 agreed to the Balfour Declaration, which acknowledged that the dominions would henceforth rank as equals to the United Kingdom, as members of the 'British Commonwealth of Nations'.", "title": "Notable meetings" }, { "paragraph_id": 3, "text": "The conference of 1930 decided to abolish the legislative supremacy of the British Parliament as it was expressed through the Colonial Laws Validity Act and other Imperial Acts. The statesmen recommended that a declaratory enactment of Parliament, which became the Statute of Westminster 1931, be passed with the consent of the dominions, but some dominions did not ratify the statute until some years afterwards. 
The 1930 conference was notable, too, for the attendance of Southern Rhodesia, despite it being a self-governing colony, not a dominion.", "title": "Notable meetings" }, { "paragraph_id": 4, "text": "As World War II drew to a close, Imperial Conferences were replaced by Commonwealth Prime Ministers' Conferences, with 17 such meetings occurring from 1944 until 1969; all but one of the meetings occurred in London. The gatherings were renamed Commonwealth Heads of Government Meetings (CHOGM) in 1971 and were henceforth held every two years with hosting duties rotating around the Commonwealth.", "title": "Towards Commonwealth meetings" }, { "paragraph_id": 5, "text": "", "title": "Further reading" } ]
Imperial Conferences were periodic gatherings of government leaders from the self-governing colonies and dominions of the British Empire between 1887 and 1937, before the establishment of regular Meetings of Commonwealth Prime Ministers in 1944. They were held in 1887, 1894, 1897, 1902, 1907, 1911, 1921, 1923, 1926, 1930, 1932 and 1937. All the conferences were held in London, the seat of the Empire, except for the 1894 and 1932 conferences which were held in Ottawa, the capital of the senior Dominion of the Crown. The 1907 conference changed the name of the meetings to Imperial Conferences and agreed that the meetings should henceforth be regular rather than taking place while overseas statesmen were visiting London for royal occasions.
2023-05-22T18:38:40Z
[ "Template:Short description", "Template:Use dmy dates", "Template:Country", "Template:Commonwealth Heads of Government Meetings", "Template:About", "Template:Use British English", "Template:Flagicon", "Template:Commons category", "Template:Reflist", "Template:Cite journal" ]
https://en.wikipedia.org/wiki/Imperial_Conference
15,395
International Refugee Organization
The International Refugee Organization (IRO) was an intergovernmental organization founded on 20 April 1946 to deal with the massive refugee problem created by World War II. A Preparatory Commission began operations fourteen months previously. In 1948, the treaty establishing the IRO formally entered into force and the IRO became a United Nations specialized agency. The IRO assumed most of the functions of the earlier United Nations Relief and Rehabilitation Administration. In 1952, operations of the IRO ceased, and it was replaced by the Office of the United Nations High Commissioner for Refugees (UNHCR). The Constitution of the International Refugee Organization, adopted by the United Nations General Assembly on 15 December 1946, is the founding document of the IRO. The constitution specified the organization's field of operations. Controversially, the constitution defined "persons of German ethnic origin" who had been expelled, or were to be expelled from their countries of birth into postwar Germany, as individuals who would "not be the concern of the Organization." This excluded from its purview a group that exceeded in number all the other European displaced persons put together. Also, because of disagreements between the Western allies and the Soviet Union, the IRO only worked in areas controlled by Western armies of occupation. Twenty-six states became members of the IRO and it formally came into existence in 1948: Argentina, Australia, Belgium, Bolivia, Brazil, Canada, Republic of China, Chile, Denmark, the Dominican Republic, France, Guatemala, Honduras, Iceland, Italy, Liberia, Luxembourg, the Netherlands, New Zealand, Norway, Panama, Peru, the Philippines, Switzerland, the United Kingdom, the United States, and Venezuela. The U.S. provided about 40% of the IRO's $155 million annual budget. The total contribution by the members for the five years of operation was around $400 million.
It had rehabilitated around 10 million people during this time, out of 15 million people who were stranded in Europe. The IRO's first Director-General was William Hallam Tuck, succeeded by J. Donald Kingsley on 31 July 1949. IRO closed its operations on 31 January 1952 and after a liquidation period, went out of existence on 30 September 1953. By that time many of its responsibilities had been assumed by other agencies. Of particular importance was the Office of the High Commissioner for Refugees, established in January 1951 as a part of the United Nations, and the Intergovernmental Committee for European Migration (originally PICMME), set up in December 1951.
[ { "paragraph_id": 0, "text": "The International Refugee Organization (IRO) was an intergovernmental organization founded on 20 April 1946 to deal with the massive refugee problem created by World War II. A Preparatory Commission began operations fourteen months previously. In 1948, the treaty establishing the IRO formally entered into force and the IRO became a United Nations specialized agency. The IRO assumed most of the functions of the earlier United Nations Relief and Rehabilitation Administration. In 1952, operations of the IRO ceased, and it was replaced by the Office of the United Nations High Commissioner for Refugees (UNHCR).", "title": "" }, { "paragraph_id": 1, "text": "The Constitution of the International Refugee Organization, adopted by the United Nations General Assembly on 15 December 1946, is the founding document of the IRO. The constitution specified the organization's field of operations. Controversially, the constitution defined \"persons of German ethnic origin\" who had been expelled, or were to be expelled from their countries of birth into postwar Germany, as individuals who would \"not be the concern of the Organization.\" This excluded from its purview a group that exceeded in number all the other European displaced persons put together. Also, because of disagreements between the Western allies and the Soviet Union, the IRO only worked in areas controlled by Western armies of occupation.", "title": "" }, { "paragraph_id": 2, "text": "Twenty-six states became members of the IRO and it formally came into existence in 1948: Argentina, Australia, Belgium, Bolivia, Brazil, Canada, Republic of China, Chile, Denmark, the Dominican Republic, France, Guatemala, Honduras, Iceland, Italy, Liberia, Luxembourg, the Netherlands, New Zealand, Norway, Panama, Peru, the Philippines, Switzerland, the United Kingdom, the United States, and Venezuela. The U.S. provided about 40% of the IRO's $155 million annual budget. 
The total contribution by the members for the five years of operation was around $400 million. It had rehabilitated around 10 million people during this time, out of 15 million people who were stranded in Europe. The IRO's first Director-General was William Hallam Tuck, succeeded by J. Donald Kingsley on 31 July 1949.", "title": "" }, { "paragraph_id": 3, "text": "IRO closed its operations on 31 January 1952 and after a liquidation period, went out of existence on 30 September 1953. By that time many of its responsibilities had been assumed by other agencies. Of particular importance was the Office of the High Commissioner for Refugees, established in January 1951 as a part of the United Nations, and the Intergovernmental Committee for European Migration (originally PICMME), set up in December 1951.", "title": "" } ]
The International Refugee Organization (IRO) was an intergovernmental organization founded on 20 April 1946 to deal with the massive refugee problem created by World War II. A Preparatory Commission began operations fourteen months previously. In 1948, the treaty establishing the IRO formally entered into force and the IRO became a United Nations specialized agency. The IRO assumed most of the functions of the earlier United Nations Relief and Rehabilitation Administration. In 1952, operations of the IRO ceased, and it was replaced by the Office of the United Nations High Commissioner for Refugees (UNHCR). The Constitution of the International Refugee Organization, adopted by the United Nations General Assembly on 15 December 1946, is the founding document of the IRO. The constitution specified the organization's field of operations. Controversially, the constitution defined "persons of German ethnic origin" who had been expelled, or were to be expelled from their countries of birth into postwar Germany, as individuals who would "not be the concern of the Organization." This excluded from its purview a group that exceeded in number all the other European displaced persons put together. Also, because of disagreements between the Western allies and the Soviet Union, the IRO only worked in areas controlled by Western armies of occupation. Twenty-six states became members of the IRO and it formally came into existence in 1948: Argentina, Australia, Belgium, Bolivia, Brazil, Canada, Republic of China, Chile, Denmark, the Dominican Republic, France, Guatemala, Honduras, Iceland, Italy, Liberia, Luxembourg, the Netherlands, New Zealand, Norway, Panama, Peru, the Philippines, Switzerland, the United Kingdom, the United States, and Venezuela. The U.S. provided about 40% of the IRO's $155 million annual budget. The total contribution by the members for the five years of operation was around $400 million.
It had rehabilitated around 10 million people during this time, out of 15 million people who were stranded in Europe. The IRO's first Director-General was William Hallam Tuck, succeeded by J. Donald Kingsley on 31 July 1949. IRO closed its operations on 31 January 1952 and after a liquidation period, went out of existence on 30 September 1953. By that time many of its responsibilities had been assumed by other agencies. Of particular importance was the Office of the High Commissioner for Refugees, established in January 1951 as a part of the United Nations, and the Intergovernmental Committee for European Migration, set up in December 1951.
2022-06-17T03:07:07Z
[ "Template:Short description", "Template:Use British English", "Template:Use dmy dates", "Template:Infobox United Nations", "Template:Reflist", "Template:Cite web", "Template:ECOSOC", "Template:Authority control" ]
https://en.wikipedia.org/wiki/International_Refugee_Organization
15,396
IRO
IRO, Iro, or iro may refer to:
[ { "paragraph_id": 0, "text": "IRO, Iro, or iro may refer to:", "title": "" } ]
IRO, Iro, or iro may refer to:
2023-05-27T21:04:16Z
[ "Template:Wiktionary", "Template:TOC right", "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/IRO
15,401
Isabella d'Este
Isabella d'Este (19 May 1474 – 13 February 1539) was Marchioness of Mantua and one of the leading women of the Italian Renaissance as a major cultural and political figure. She was a patron of the arts as well as a leader of fashion, whose innovative style of dressing was copied by numerous women. The poet Ariosto labeled her as the "liberal and magnanimous Isabella", while author Matteo Bandello described her as having been "supreme among women". Diplomat Niccolò da Correggio went even further by hailing her as "The First Lady of the world". She served as the regent of Mantua during the absence of her husband Francesco II Gonzaga and during the minority of her son Federico. She was a prolific letter-writer and maintained a lifelong correspondence with her sister-in-law Elisabetta Gonzaga. Isabella grew up in a cultured family in the city-state of Ferrara. She received a fine classical education and as a girl met many famous humanist scholars and artists. Due to the vast amount of extant correspondence between Isabella and her family and friends, her life is extremely well documented. Isabella was born on Tuesday, 19 May 1474 at nine o'clock in the evening in Ferrara, to Ercole I d'Este, Duke of Ferrara, and Eleanor of Naples; Isabella's mother wrote a letter to her friend Barbara Gonzaga describing the details of the birth. Eleanor was the daughter of Ferdinand I, the Aragonese King of Naples, and Isabella of Clermont. One year later, on 29 June 1475, her sister Beatrice was born, and in 1476 and 1477 two brothers, Alfonso and Ferrante, were born. In 1479 and 1480 two more brothers were born: Ippolito and Sigismondo. Of all the children born into the family, Isabella was considered to have been the favourite. In the year of her brother Ferrante's birth, Isabella was among the children of the family who travelled to Naples with her mother.
When her mother returned to Ferrara, Isabella accompanied her, while the other two children remained in Naples for many years: Beatrice was adopted by her grandfather, and her little brother Ferrante was left under the tutelage of their uncle Alfonso. Because of her outstanding intellect, she often discussed the classics and the affairs of state with ambassadors. Moreover, she was personally acquainted with the painters, musicians, writers, and scholars who lived in and around the court. Besides her knowledge of history and languages, she could also recite Virgil and Terence by heart. Isabella was also a talented singer and musician, and was taught to play the lute by Giovanni Angelo Testagrossa. In addition to all these admirable accomplishments, she was also an innovator of new dances, having been instructed in the art by Ambrogio, a Jewish dancing master. In 1480, at the age of six, Isabella was betrothed to Francesco, eight years her senior and heir to the Marquisate of Mantua. Two weeks later, the Duke of Milan requested her hand in marriage for his son Ludovico; instead, her sister Beatrice was betrothed to Ludovico and became the Duchess of Milan. Her dowry amounted to 25,000 ducats. Although he was not handsome, Isabella admired Francesco for his strength and bravery; she also regarded him as a gentleman. After their first few encounters she found that she enjoyed his company, and she spent the next few years getting to know him and preparing to be the Marchioness of Mantua. During their courtship, Isabella treasured the letters, poems, and sonnets he sent her as gifts. Ten years later, on 11 February 1490, at age 15, she married Francesco by proxy. By then, he had succeeded to the marquisate. Besides being the Marquess, Francesco was captain general of the armies of the Republic of Venice. Isabella became his wife and marchioness amid a spectacular outpouring of popular acclamation and a grand celebration that took place on 15 February.
She brought as her marriage portion the sum of 3,000 ducats as well as valuable jewellery, dishes, and a silver service. Prior to the magnificent banquet which followed the wedding ceremony, Isabella rode through the main streets of Ferrara astride a horse draped in gems and gold. In 1491 Isabella went with a small entourage to Brescello and from there to Pavia, to accompany her sister Beatrice, who was married to Ludovico il Moro. On this occasion she again saw Galeazzo Sanseverino, whom she had already known as a child in Ferrara, and with whom she undertook a dense, and at times humorous, exchange of letters. It must be said, however, that the identity of the sender is not certain and could have been the almost homonymous Galeazzo Visconti, Count of Busto Arsizio, a courtier also dear to the dukes. A dispute immediately ignited between the two, destined to last for months, over who was the better paladin, Orlando or Rinaldo: Galeazzo supported the former, the d'Este sisters the latter. Galeazzo, who exercised a strong fascination, soon managed to convert them both to Orlando's camp, but Isabella, once back in Mantua, returned to preferring Rinaldo, so that Galeazzo, reminding her that "I alone was enough to make her change her mind and cry out Rolando! Rolando!", invited her to follow her sister's example and swore that he would convert her a second time as soon as they met again. Isabella jokingly replied that she would then bring a frog to offend him, and the dispute went on for a long time. On 11 February, speaking to her about the amusements he had with Beatrice, he wrote to her: "I will also strive to improve in order to give greater pleasure to Your Ladyship, when I come for her this summer", and lamented the lack of her "sweet company".
Isabella's presence was in fact much desired in Milan, not only by Galeazzo but also by her sister, Ludovico and the other courtiers; however, the Marquise was able to go there only a few times, as her husband Francesco was wary of sending her there, judging that too many follies were committed at that court, and perhaps also out of jealousy of Ludovico. Despite the affection, Isabella began to feel envy for her sister Beatrice, first for the very fortunate marriage that had fallen to her and for the enormous riches, then for the two sons in perfect health who were born to her a short time later, while Isabella herself seemed unable to have children; this aroused the concerns of her mother Eleonora, who continually exhorted her in letters to be as close as possible to her husband. A certain resentment can also be seen in a letter to her mother dating back to her visit to Pavia in August 1492, when, speaking of Beatrice, she wrote: "she is no taller than me, but she is much stouter!"; she expressed herself in a similar way to her husband, perhaps not yet able to know that her sister's stoutness was due to her incipient pregnancy (she was in her fourth or fifth month). These frictions were perhaps also linked to the fact that Ludovico had initially asked for Isabella's hand, in 1480, and that this had not been possible because, only a few days earlier, Duke Ercole had officially promised her hand to Francesco Gonzaga. Despite everything, in 1492 she was very close to Beatrice in a difficult moment of her pregnancy, namely when Beatrice was suddenly struck by an attack of malarial fevers, and in 1495 she went again to Milan to assist her sister in her second birth and also baptized her nephew Francesco. In the summer of 1494, on the occasion of the descent of the French into Italy, Beatrice invited her sister to Milan to kiss Gilbert of Montpensier and others of the royal house, according to the French custom.
Secretary Benedetto Capilupi reported: "The Duchess says that when the Duke of Orléans came, she had to dress colorfully, dance and be kissed by the Duke, who wanted to kiss all the bridesmaids and women of account. [...] Should Count Delfino or someone else of royal blood come, the Duchess invites Your Ladyship to receive these little kisses." In fact, it does not seem that Beatrice harboured any conflicting feelings towards Isabella, nor that she looked askance at the complicity between the latter and her husband Ludovico. Il Moro, in fact, who was of a generous nature, often gave Isabella very expensive gifts: once he sent her fifteen braccia of a fabric so precious that it cost forty ducats per braccio, an astonishing sum, saying that he had already had a dress made of it for Beatrice. After the death of his wife in 1497, Ludovico went so far as to allude to a secret relationship with Isabella, claiming that it was out of jealousy of his wife that the Marquis Francesco played a double game between him and the Signoria of Venice. The rumor was, however, promptly denied by her father Ercole. Others instead described Beatrice's attitude towards her sister as that of a "complexed second child", because in her letter of congratulations to Isabella on the birth of little Eleonora (who, being female, deeply disappointed her mother) she added the greetings of her little son Ercole to "soa cusina" ("his cousin"), though the child had not yet turned one year of age, something that historians such as Luciano Chiappini interpreted as a sort of mockery, of "refined malice", "a slap given with grace". In fact, while Isabella had always been the daughter most loved by her parents, Beatrice had been ceded to her grandfather, and only with the birth of her firstborn had she obtained her own redemption.
Other mischief between the sisters dates back to the weeks immediately following the battle of Fornovo: Beatrice, who was at the siege of Novara together with the Marquis Francesco, wanted to see the booty stolen from the tent of King Charles VIII during the battle, booty that Francesco, however, had already sent to his wife in Mantua. He wrote to his wife to give it to his sister-in-law, but Isabella replied that she was not so willing to cede this honor to her sister and, with the excuse that she lacked a mule, begged her husband to invent some expedient. Beatrice replied that it was not her intention to steal the booty from her sister, but that she only wanted to see it all together and would then return it to her. Meanwhile, it occurred to her to procure "a femina de partito", that is, a high-ranking prostitute, for Francesco, saying she did it "for a good cause and to avoid greater evil", that is, to preserve her brother-in-law and sister from the terrible malfrancese (syphilis), but perhaps also to ingratiate herself with him. In October Francesco wrote to his wife, regretting that she was not there with them to see the army before it was disbanded, but it does not seem that he urged her to come, probably because he had her safety at heart (the camps were dangerous places, where violent fights often broke out, and Beatrice herself had on one occasion been saved by Francesco, when she risked being raped by a few thousand Alemannic mercenaries). Moreover, Isabella had already had a mishap with some Genoese soldiers who, upon entering the city in 1492, surrounded her to appropriate her mount and harness, according to custom. As she later told her husband: "I was never more afraid; and they tore all the harness to pieces, and took off the bridle before I could dismount, despite the fact that the governor interposed himself and that I voluntarily offered it to him. I lost heart, although among so many partisans I was afraid of some misfortune. Finally, with help, I freed myself from their hands."
Having also received different educations, the two sisters were the opposite of each other: Isabella, more like her mother, was sweet, graceful and a lover of tranquility; Beatrice, more like her father, was impetuous, adventurous and aggressive. Beatrice loved to shoot the crossbow; Isabella had "a hand so light that we cannot play well [the clavichord] when we have to strain it against the hardness of the keys". However, they were united by the desire to excel in everything. Over the last two hundred years, historians and writers have been divided in their preference for one or the other: many, such as Francesco Malaguzzi Valeri and Maria Bellonci, regretted that Ludovico had so narrowly missed marrying Isabella, fantasizing about the splendours that Isabella would have been able to bring to Milan, in conditions of greater prosperity than in Mantua, and how she could have distracted il Moro from his perverse policy. These judgments were not free of a blatant contempt for the second daughter, as in the case of Alessandro Luzio, who writes: "Fortune, which made sport of this Sforza, making him pass from the brightest heights to the darkest abysses of misery, had in April 1480 exchanged a beneficial star for a sinister meteor". In truth, other historians, including Rodolfo Renier himself, Luzio's colleague, judged that Beatrice was the more suitable wife for Ludovico, since she knew how, with her own audacity, to instill courage in her insecure consort, and acquired political depth already in her early youth, so much so as to be decisive in situations of greatest danger, while Isabella could claim such a role only in her years of maturity. The different fates of the two sisters certainly weighed in these judgments: Isabella lived sixty-five years, Beatrice died at twenty-one. It was from this tragic loss, over which she proved inconsolable, that Isabella undertook to support her brother-in-law's cause with her husband Francesco, who was against him.
She continued to do so until the fall of the Sforza in 1499, when she suddenly changed sides and declared herself "a good Frenchwoman". As the couple had known and admired one another for many years, their mutual attraction deepened into love. Reportedly, marriage to Francesco caused Isabella to "bloom". At the time of her wedding, Isabella was said to have been pretty, slim, graceful, and well-dressed. Her long, fine hair was dyed a fashionable pale blonde and her eyes were described as "brown as fir cones in autumn, scattered laughter". Almost four years after her marriage, in December 1493, Isabella gave birth to her first child out of an eventual total of eight. She was a daughter, Eleonora, whom they called Leonora for short, after Isabella's mother, Eleonora of Naples. Isabella's relationship with her husband over the years often proved to be tense, at times very tense, both because of the political differences between the two and because of the difficulty of producing a male heir. In truth, Francesco for his part was always very proud of his daughters and never showed himself disappointed; indeed, from the beginning he declared himself in love with the firstborn Eleonora, despite the absolute disappointment of Isabella, who rejected her daughter. Eleonora was then very lovingly educated by her sister-in-law Elisabetta, who, because of her husband's impotence, never had children of her own. When in 1496 the second daughter, Margherita, was born, Isabella was so angry that she wrote to her husband, who was then fighting the French in Calabria, a letter in which she blamed him, declaring that she did nothing but reap the fruits of what he had sown. Francesco replied that he was, on the contrary, very happy with the birth of his daughter (whom, however, he never had the chance to know, as she died in swaddling clothes) and indeed forbade anyone to show discontent at it. Only in 1500 was the long-awaited son Federico finally born; he was the most loved by Isabella.
In his capacity as captain general of the Venetian armies, Francesco was often required to go to Venice for conferences, leaving Isabella in Mantua on her own at La Reggia, the ancient palace that was the family seat of the Gonzagas. She did not lack company, however, as she passed the time with her mother and with her sister, Beatrice. When Isabella met Elisabetta Gonzaga, her 18-year-old sister-in-law, the two women became close friends. They enjoyed reading books, playing cards, and travelling about the countryside together. Once they journeyed as far as Lake Garda during one of Francesco's absences, and later travelled to Venice. They maintained a steady correspondence until Elisabetta's death in 1526. Isabella had met the French king Louis XII in Milan in 1500 on a successful diplomatic mission that she had undertaken to protect Mantua from French invasion. Louis had been impressed by her alluring personality and keen intelligence. It was while she was being entertained by Louis, whose troops occupied Milan, that she offered asylum to Milanese refugees including Cecilia Gallerani, the refined mistress of her sister Beatrice's husband, Ludovico Sforza, Duke of Milan, who had been forced to leave his duchy in the wake of the French occupation. Isabella presented Cecilia to King Louis, describing her as a "lady of rare gifts and charm". A year after her 1502 marriage to Isabella's brother Alfonso, the notorious Lucrezia Borgia became the mistress of Francesco. At about the same time, Isabella had given birth to a daughter, Ippolita, and she continued to bear him children throughout Francesco and Lucrezia's long, passionate affair, which was more sexual than romantic. Lucrezia had previously made overtures of friendship to Isabella, which the latter had coldly and disdainfully ignored.
From the time Lucrezia had first arrived in Ferrara as Alfonso's intended bride, Isabella, despite having acted as hostess during the wedding festivities, had regarded Lucrezia as a rival, whom she sought to outdo at every opportunity. Francesco's affair with Lucrezia, whose beauty was renowned, caused Isabella much jealous suffering and emotional pain. The liaison ended when he contracted syphilis as a result of encounters with prostitutes. Isabella played an important role in Mantua during troubled times for the city. When her husband was captured in 1509 and held hostage in Venice, she took control of Mantua's military forces and held off the invaders until his release in 1512. In the same year, 1512, she was the hostess at the Congress of Mantua, which was held to settle questions concerning Florence and Milan. As a ruler, she appeared to have been much more assertive and competent than her husband. When apprised of this fact upon his return, Francesco was furious and humiliated at being surpassed by his wife's superior political ability. This caused their marriage to break down irrevocably. As a result, Isabella began to travel freely and live independently from her husband until his death on 19 March 1519. After the death of her husband, Isabella ruled Mantua as regent for her son Federico. She began to play an increasingly important role in Italian politics, steadily advancing Mantua's position. She was instrumental in promoting Mantua to a Duchy, which was obtained by wise diplomatic use of her son's marriage contracts. She also succeeded in obtaining a cardinalate for her son Ercole. She further displayed shrewd political acumen in her negotiations with Cesare Borgia, who had dispossessed Guidobaldo da Montefeltro, duke of Urbino, the husband of her sister-in-law and good friend Elisabetta Gonzaga in 1502. As a widow, Isabella at the age of 45 became a "devoted head of state". 
Her position as Marchioness demanded her serious attention; she therefore studied the problems faced by the ruler of a city-state. To improve the well-being of her subjects she studied architecture, agriculture, and industry, and followed the principles that Niccolò Machiavelli had set forth for rulers in his book The Prince. In return, the people of Mantua respected and loved her. Isabella left Mantua for Rome in 1527. She was present during the catastrophic Sack of Rome, when she converted her house, the Palazzo Colonna, into an asylum for approximately 2,000 people (including clerics, nobles and common citizens) fleeing the Imperial soldiers. Her palazzo was the only place safe from attack, because her son Ferrante Gonzaga was a general in the invading army and she herself had a good relationship with the emperor. When she left Rome, she managed to acquire safe passage for all the refugees who had sought refuge in her home. Once Rome stabilized following the sacking, she left the city and returned to Mantua. She made Mantua a centre of culture, started a school for girls, and turned her ducal apartments into a museum containing the finest art treasures. This was not enough to satisfy Isabella, already in her mid-sixties, so she returned to political life and ruled Solarolo, in Romagna, until her death on 13 February 1539. She was buried beside her husband in the Church of Santa Paola in Mantua, but her remains were later stolen. Isabella's appearance was frequently written about in her lifetime.
Mario Equicola said that "her eyes were black and sparkling, her hair yellow, and her complexion one of dazzling brilliancy." Similarly, Gian Giorgio Trissino's I Ritratti has a fictionalized Pietro Bembo describe Isabella's "rippling golden hair that flowed in thick masses over her shoulders," in a passage that, according to art historian Sally Hickson, identifies Isabella as the "living paragon of female beauty." The real Bembo praised Isabella's "beautiful and charming hand and pure, sweet voice" in a letter addressed to her. The alleged beauty of Isabella attracted the attention of the king of France, Charles VIII, who asked the chaplain Bernardino of Urbino about her features and attempted to arrange a meeting with her. However, this meeting never took place, as he returned to France shortly afterwards. Isabella herself frequently diminished her own appearance; commenting on Francia's portrait of her, she told him that he had "made us far more beautiful by your art than nature ever made us." Likewise she told Trissino that "your praises of us far exceed the truth", and said of Titian's portrait that "we doubt that at the age he represents us we were ever of the beauty it contains." In 1534, the same year that Titian's portrait was painted, Titian's friend Pietro Aretino mocked her appearance, calling her "the monstrous Marchioness of Mantua, with ebony teeth and ivory eyelashes, dishonestly ugly and ultra-dishonestly tarted up." Despite her displays of modesty, Isabella was also known to lose herself in front of a mirror. Isabella was worried about her weight from an early age. As an adult she frequently discussed her weight with those close to her.
In 1499 she sent a portrait by Giovanni Santi to her brother-in-law Ludovico Sforza, complaining that it did not resemble her very much, "for being a little fatter than me." Ludovico replied that he liked the portrait of her very much and that it was very similar to her, although "somewhat more fat", unless Isabella had "grown fatter after we saw her." In 1509 she complained to her husband that "if she had more to do with running the state she would not have grown fat", while in 1511 her sister Lucrezia complained about an early draft of the Francia portrait that made her look too thin. Her face became damaged and prematurely aged by Venetian ceruse. During her lifetime and after her death, poets, popes, and statesmen paid tribute to Isabella. Pope Leo X invited her to treat him with "as much friendliness as you would your brother". The latter's secretary Pietro Bembo described her as "one of the wisest and most fortunate of women". The poet Ariosto deemed her the "liberal and magnanimous Isabella". Author Matteo Bandello wrote that she was "supreme among women", and the diplomat Niccolò da Correggio entitled her "The First Lady of the world". Harsher judgments, indeed very severe ones, were instead expressed by Pope Julius II, a man of corrupt morals, who, in disagreement with Isabella's conduct, even went so far as to call her "that ribald whore". A not dissimilar judgment was also expressed by her husband Francesco himself, who, then a prisoner of the Venetians, accused his wife of not loving him and indeed of having been the cause of his ruin, referring to her in a letter as "that whore of my wife". Isabella d'Este is famous as the most important art patron of the Renaissance; her life is documented by her correspondence, which remains archived in Mantua (approximately 28,000 letters received and copies of approximately 12,000 letters written).
In painting she had numerous famous artists of the time work for her, including Giovanni Bellini, Giorgione, Leonardo da Vinci, Andrea Mantegna (court painter until 1506), Perugino, Raphael, Titian, Antonio da Correggio, Lorenzo Costa (court painter from 1509), Dosso Dossi, Francesco Francia, Giulio Romano, and many others. For instance her 'Studiolo' in the Ducal Palace, Mantua, was decorated with allegories by Mantegna, Perugino, Costa, and Correggio. It is suggested that she requested a specific painting for her studio from da Vinci that is known as La Scapigliata and may have given it to her son Federico II on the occasion of his wedding to Margaret Paleologa. She also engaged the most important sculptors and medallists of her time, including Michelangelo, Pier Jacopo Alari Bonacolsi (L'Antico), Gian Cristoforo Romano, and Tullio Lombardo, and she collected ancient Roman art. Among writers, she was in contact with Pietro Aretino, Ludovico Ariosto, Pietro Bembo, Baldassare Castiglione, Mario Equicola, Gian Giorgio Trissino, and others. In music Isabella sponsored the composers Bartolomeo Tromboncino and Marco Cara, and she herself played the lute. Unusually, she employed women as professional singers at her court, including Giovanna Moreschi, the wife of Marchetto Cara. In architecture she could not afford new palaces; however, she commissioned architects such as Biagio Rossetti and Battista Covo. She was also considered a fashion icon of her time. Her balzo headdress became famous – documented as her invention in letters from around 1509 and visible several times in portraits of other ladies in the 1520s and 1530s. Despite her significant art patronage, which included a number of portraits, there are very few surviving portraits that can be identified as Isabella, especially when compared to her sister Beatrice. It is known that the elderly Isabella preferred idealized paintings and even declined to sit as a model. 
However, it may be presumed that she nonetheless insisted on seeing her personal characteristics in the outcome. The few identifications that exist are inconsistent with one another (differing eye and hair colours, as well as divergent eyebrows, in two Titian portraits), and there are no known images of her between the ages of 26 and 54. In 1495 she flatly refused to pose for Mantegna for the Madonna della Vittoria – in which her figure was to appear next to that of her husband – since in the past the painter had portrayed her "so badly done" – in a painting that has in fact not survived – "which has none of my likeness". However, the Marquise's negative judgment was not due to Mantegna's inability to portray her true to life but, as she herself writes, to the opposite failing: not knowing how to "well counterfeit the natural", that is, to idealize. Her husband Francesco had to pose alone, and Mantegna remedied the disturbance to the symmetry by painting, in place of the Marquise, St. Elizabeth, her eponymous saint. In recent years several museums have withdrawn their few identifications of portraits as Isabella because of concern about possible misidentification. The remaining three colour portraits are still inconsistent with one another. La Bella (now in the Palazzo Pitti, Florence) has been discussed as an alternative to Titian's 1536 portrait in Vienna (Kunsthistorisches Museum/KHM), because the commission from the 60-year-old patron was for a rejuvenated portrait; if La Bella were Isabella, eye colour, hair colour, eyebrows, and general appearance would agree across all known portraits, allowing potential links toward further identifications. As of 2021, the 1495 medal by Gian Cristoforo Romano (several extant copies) is the only reliable identification because of the inscription created during Isabella's lifetime. Idealised portraits still show characteristics of the person. 
The following characteristics can be derived (characteristics of the disputed Isabella in Black are excluded): In the current catalogue raisonné of Leonardo da Vinci (2019), only Isabella d'Este is documented as a plausible alternative subject of Leonardo's Mona Lisa, which is usually considered a portrait of Lisa del Giocondo. Lisa was the wife of a Florentine merchant, and Giorgio Vasari wrote of her portrait by Leonardo – though debate persists about whether this is the portrait now known as the Mona Lisa. Evidence in favor of Isabella as the subject of the famous work includes Leonardo's drawing 'Isabella d'Este' from 1499 and her letters of 1501–1506 requesting the promised painted portrait. Further arguments focus on the mountains in the background, indicating the native origin of the subject, and on the armrest in the painting, a Renaissance symbol used to identify a portrait as that of a sovereign. The Louvre's reservation is that Isabella would be a "blonde", a feature that exists only in the widely circulated but uncertain representation Isabella in Black. Together Isabella and Francesco had eight children: Correspondence exchanged by Isabella documents the Renaissance European tendency to perceive black African slaves as exotic. Isabella's pursuit of a black child as a servant is extensively documented. On 1 May 1491 Isabella asked Giorgio Brognolo, her agent in Venice, to procure a young black girl ('una moreta') between the ages of one-and-a-half and four, and twice in early June reminded him of the request, emphasizing that the girl should be 'as black as possible'. Isabella's household and financial records reflect that she already had a significantly older black girl in her service when she inquired after a younger black child. Records also reflect that she obtained a little black girl from a Venetian orphanage. 
She opened negotiations with a Venetian patrician household for the sale of a little black boy and purchased an enslaved little black girl from her sister. The artwork The Dinner Party by Judy Chicago features a place setting for Isabella d'Este. Isabella d'Este was portrayed by Belgian actress Alexandra Oppo in the television show Borgia (2011–2014).
[ { "paragraph_id": 0, "text": "Isabella d'Este (19 May 1474 – 13 February 1539) was Marchioness of Mantua and one of the leading women of the Italian Renaissance as a major cultural and political figure.", "title": "" }, { "paragraph_id": 1, "text": "She was a patron of the arts as well as a leader of fashion, whose innovative style of dressing was copied by numerous women. The poet Ariosto labeled her as the \"liberal and magnanimous Isabella\", while author Matteo Bandello described her as having been \"supreme among women\". Diplomat Niccolò da Correggio went even further by hailing her as \"The First Lady of the world\".", "title": "" }, { "paragraph_id": 2, "text": "She served as the regent of Mantua during the absence of her husband Francesco II Gonzaga and during the minority of her son Federico. She was a prolific letter-writer and maintained a lifelong correspondence with her sister-in-law Elisabetta Gonzaga. Isabella grew up in a cultured family in the city-state of Ferrara. She received a fine classical education and as a girl met many famous humanist scholars and artists. Due to the vast amount of extant correspondence between Isabella and her family and friends, her life is extremely well documented.", "title": "" }, { "paragraph_id": 3, "text": "Isabella was born on Tuesday, 19 May 1474 at nine o'clock in the evening in Ferrara, to Ercole I d'Este, Duke of Ferrara, and Eleanor of Naples. Isabella's mother wrote a letter to her friend Barbara Gonzaga describing the details of Isabella's birth. Eleanor was the daughter of Ferdinand I, the Aragonese King of Naples, and Isabella of Clermont.", "title": "Early life" }, { "paragraph_id": 4, "text": "One year later on 29 June 1475, her sister Beatrice was born, and in 1476 and 1477 two brothers, Alfonso and Ferrante, were born. In 1479 and 1480 two more brothers were born: Ippolito and Sigismondo. 
Of all the children born into the family, Isabella was considered to have been the favourite.", "title": "Early life" }, { "paragraph_id": 5, "text": "In the year of her brother Ferrante's birth, Isabella was among the children of the family who travelled to Naples with her mother. When her mother returned to Ferrara, Isabella accompanied her, while the other two children remained in Naples for many years: Beatrice was adopted by her grandfather, and her little brother Ferrante was left under the tutelage of their uncle Alfonso.", "title": "Early life" }, { "paragraph_id": 6, "text": "Because of her outstanding intellect, she often discussed the classics and the affairs of state with ambassadors. Moreover, she was personally acquainted with the painters, musicians, writers, and scholars who lived in and around the court. Besides her knowledge of history and languages, she could also recite Virgil and Terence by heart. Isabella was also a talented singer and musician, and was taught to play the lute by Giovanni Angelo Testagrossa. In addition to all these admirable accomplishments, she was also an innovator of new dances, having been instructed in the art by Ambrogio, a Jewish dancing master.", "title": "Early life" }, { "paragraph_id": 7, "text": "In 1480, at the age of six, Isabella was betrothed to Francesco, eight years her senior and heir to the marquisate of Mantua. Two weeks later, a request for her hand came from Milan on behalf of Ludovico Sforza. Instead, her sister Beatrice was betrothed to Ludovico and became the Duchess of Milan. Her dowry amounted to 25,000 ducats. Although he was not handsome, Isabella admired Francesco for his strength and bravery; she also regarded him as a gentleman. After their first few encounters she found that she enjoyed his company, and she spent the next few years getting to know him and preparing to be the Marchioness of Mantua. 
During their courtship, Isabella treasured the letters, poems, and sonnets he sent her as gifts.", "title": "Early life" }, { "paragraph_id": 8, "text": "Ten years later, on 11 February 1490, at age 15, she married Francesco by proxy. By then, he had succeeded to the marquisate. Besides being the Marquess, Francesco was captain general of the armies of the Republic of Venice. Isabella became his wife and marchioness amid a spectacular outpouring of popular acclamation and a grand celebration that took place on 15 February. She brought as her marriage portion the sum of 3,000 ducats as well as valuable jewellery, dishes, and a silver service. Prior to the magnificent banquet which followed the wedding ceremony, Isabella rode through the main streets of Ferrara astride a horse draped in gems and gold.", "title": "Early life" }, { "paragraph_id": 9, "text": "In 1491 Isabella went with a small entourage to Brescello and from there to Pavia, to accompany her sister Beatrice, who was married to Ludovico il Moro. On this occasion she again met Galeazzo Sanseverino – whom she had already known as a child in Ferrara – and undertook with him a dense, and at times humorous, exchange of letters. It must be said, however, that the identity of her correspondent is not certain: it could also have been the similarly named Galeazzo Visconti, Count of Busto Arsizio, a courtier likewise dear to the dukes.", "title": "Early life" }, { "paragraph_id": 10, "text": "A dispute immediately ignited between the two, destined to last for months, over who was the better paladin, Orlando or Rinaldo: Galeazzo championed the former, the d'Este sisters the latter. Galeazzo, who exercised a strong fascination, soon managed to convert them both to Orlando's cause, but Isabella, once back in Mantua, returned to preferring Rinaldo. Galeazzo, recalling that \"I alone was enough to make her change her mind and cry out Rolando! Rolando!\", invited her to follow her sister's example and swore that he would convert her a second time as soon as they met again. Isabella jokingly replied that she would then bring a frog to offend him, and the dispute went on for a long time.", "title": "Early life" }, { "paragraph_id": 11, "text": "On February 11, speaking to her about the amusements he had with Beatrice, he wrote to her: \"I will also strive to improve in order to give greater pleasure to the S. V., when I come for her this summer\", and lamented the lack of her \"sweet company\". Isabella's presence was in fact much desired in Milan, not only by Galeazzo but also by her sister, Ludovico and the other courtiers; however, the Marquise was able to go there only a few times, as her husband Francesco was wary of sending her, judging that too many \"follies\" were committed at that court, and perhaps also out of jealousy of Ludovico.", "title": "Early life" }, { "paragraph_id": 12, "text": "Despite the affection, Isabella began to feel envy for her sister Beatrice: first for the very fortunate marriage that had fallen to her and for her enormous riches, then for the two sons in perfect health who were born to her a short time later, while Isabella herself seemed unable to have children; this aroused the concern of her mother Eleonora, who continually exhorted her in letters to stay as close as possible to her husband. A certain spite can also be seen in a letter to her mother dating from her visit to Pavia in August 1492, when, speaking of Beatrice, she wrote: \"she is not greater than me, but she is much bigger!\"; she expressed herself in a similar way to her husband, perhaps not yet able to know that her sister's stoutness was due to an incipient pregnancy (she was in the fourth or fifth month). 
These frictions were perhaps also linked to the fact that Ludovico had initially asked for Isabella's hand in 1480, and that the match had not been possible because, only a few days earlier, Duke Ercole had officially promised her to Francesco Gonzaga.", "title": "Early life" }, { "paragraph_id": 13, "text": "Despite everything, in 1492 she was very close to Beatrice during a difficult moment of her pregnancy, when Beatrice was suddenly struck by an attack of malarial fever, and in 1495 she went to Milan again to assist her sister in her second delivery and also stood godmother to her nephew Francesco at his baptism.", "title": "Early life" }, { "paragraph_id": 14, "text": "In the summer of 1494, on the occasion of the descent of the French into Italy, Beatrice invited her sister to Milan to kiss Gilbert of Montpensier and others of the royal house, according to the French custom. The secretary Benedetto Capilupi reported:", "title": "Early life" }, { "paragraph_id": 15, "text": "The Duchess says that when the Duke of Orléans came, she had to dress colorfully, dance and be kissed by the Duke, who wanted to kiss all the bridesmaids and ladies of account. [...] Should Count Delfino or someone else of royal blood come, the Duchess invites the S.V. to receive these little kisses", "title": "Early life" }, { "paragraph_id": 16, "text": "In fact, it does not seem that Beatrice harboured any conflicting feelings towards Isabella, nor that she looked unfavourably on the complicity between the latter and her husband Ludovico. The Moro, who was of a generous nature, often gave Isabella even very expensive gifts: once he sent her fifteen braccia of a fabric so precious that it cost forty ducats per braccio – an astonishing sum – saying that he had already had a dress made from it for Beatrice.", "title": "Early life" }, { "paragraph_id": 17, "text": "After the death of his wife in 1497, Ludovico went so far as to allude to a secret relationship with Isabella, claiming that it was out of jealousy over his wife that the Marquis Francesco played a double game between him and the Signoria of Venice. The rumor, however, was promptly denied by Isabella's father Ercole.", "title": "Early life" }, { "paragraph_id": 18, "text": "Others instead described Beatrice's attitude towards her sister as that of a \"second child with a complex\", because in her letter of congratulations to Isabella on the birth of little Eleonora - who, being female, deeply disappointed her mother - she added the greetings of her little son Ercole to \"soa cusina\" (his cousin), although the child had not yet turned one year of age, something that historians such as Luciano Chiappini interpreted as a sort of mockery, of \"refined malice\", \"a slap given with grace and courtesy\". Indeed, while Isabella had always been the daughter most loved by her parents, Beatrice had been given away to her grandfather, and only with the birth of her firstborn had she obtained her own redress.", "title": "Early life" }, { "paragraph_id": 19, "text": "Further mischief between the sisters dates back to the weeks immediately following the Battle of Fornovo: Beatrice, who was at the siege of Novara together with the Marquis Francesco, wanted to see the booty taken from the tent of King Charles VIII during the battle, which Francesco, however, had already sent to his wife in Mantua. 
He wrote to his wife to hand it over to his sister-in-law, but Isabella replied that she was not so willing to cede this honor to her sister and, with the excuse that she lacked a mule, begged her husband to invent some expedient. Beatrice replied that it was not her intention to steal the booty from her sister, but that she only wanted to see it all together and then return it to her. Meanwhile, she took it upon herself to procure \"a femina de partito\", that is, a high-ranking prostitute, for Francesco, saying she did it \"for a good cause and to avoid greater evil\", that is, to preserve her brother-in-law and her sister from the terrible mal francese (syphilis), but perhaps also to ingratiate herself with him. In October Francesco wrote to his wife regretting that she was not there with them to see the army before it was disbanded, but it does not seem that he urged her to come, probably because he had her safety at heart (the camps were dangerous places, where violent fights often broke out, and Beatrice herself had on one occasion been saved by Francesco, when she risked being raped by a few thousand German mercenaries).", "title": "Early life" }, { "paragraph_id": 20, "text": "Moreover, Isabella had already had a mishap with some Genoese soldiers who, upon entering the city in 1492, surrounded her to appropriate her mount and harness, according to custom. So she later told her husband: \"I was never more afraid; and they tore all the harness to pieces, and took off the bridle before I could dismount, despite the fact that the governor interposed himself and that I voluntarily offered it to him. I lost heart, although among so many partisans I was afraid of some misfortune. Finally, with help, I freed myself from their hands\".", "title": "Early life" }, { "paragraph_id": 21, "text": "Having also received different educations, the two sisters were the opposite of each other: Isabella, more like her mother, was sweet, graceful and a lover of tranquility; Beatrice, more like her father, was impetuous, adventurous and aggressive. Beatrice loved to shoot the crossbow; Isabella had a hand \"so light that we cannot play well [the clavichord] when we have to strain it against the hardness of the keys\". However, they were united by the desire to excel in everything.", "title": "Early life" }, { "paragraph_id": 22, "text": "Over the last two hundred years historians and writers have been divided in their preference for one sister or the other: many - such as Francesco Malaguzzi Valeri and Maria Bellonci - regretted that Ludovico had so narrowly missed marrying Isabella, fantasizing about the splendors that Isabella would have been able to bring to Milan, in conditions of greater prosperity than in Mantua, and how she could have distracted the Moro from his perverse policy. These judgments were not free from a blatant contempt for the second daughter, as in the case of Alessandro Luzio, who writes: \"Fortune, which made sport of this Sforza, making him pass from the brightest heights to the darkest abysses of misery, had in April 1480 exchanged a beneficial star for a sinister meteor\".", "title": "Early life" }, { "paragraph_id": 23, "text": "In truth, other historians, including Luzio's own colleague Rodolfo Renier, judged that Beatrice was the more suitable wife for Ludovico, since she knew how, with her own audacity, to instill courage in her insecure consort, and acquired political depth already in her early youth, so much so as to prove decisive in situations of greatest danger, while Isabella could boast such a role only in her years of maturity. 
The different fates of the two sisters certainly weighed on these judgments: Isabella lived sixty-five years, Beatrice died at twenty-one. It was after this tragic loss, over which she proved inconsolable, that Isabella undertook to plead her brother-in-law's cause with her husband Francesco, who was against him. She continued to do so until the fall of the Sforza in 1499, when she suddenly changed sides and declared herself a \"good Frenchwoman\".", "title": "Early life" }, { "paragraph_id": 24, "text": "As the couple had known and admired one another for many years, their mutual attraction deepened into love. Reportedly, marriage to Francesco caused Isabella to \"bloom\". At the time of her wedding, Isabella was said to have been pretty, slim, graceful, and well-dressed. Her long, fine hair was dyed a fashionable pale blonde and her eyes were described as \"brown as fir cones in autumn, scattered laughter\". Almost four years after her marriage, in December 1493, Isabella gave birth to her first child out of an eventual total of eight. She was a daughter, Eleonora, whom they called Leonora for short, after Isabella's mother, Eleonora of Naples.", "title": "Early life" }, { "paragraph_id": 25, "text": "Isabella's relationship with her husband often proved tense over the years, at times very tense, both because of the political differences between the two and because of the difficulty of producing a male heir. In truth, Francesco for his part was always very proud of his daughters and never showed disappointment; indeed, from the beginning he declared himself in love with the firstborn Eleonora, despite the utter disappointment of Isabella, who rejected her daughter. Eleonora was then very lovingly brought up by her sister-in-law Elisabetta, who because of her husband's impotence never had children of her own. When the second daughter, Margherita, was born in 1496, Isabella was so angry that she wrote to her husband, who was then fighting the French in Calabria, a letter in which she blamed him, declaring that she was merely reaping the fruits of what he had sown. Francesco replied that he was instead very happy with the birth of his daughter – whom, however, he did not have time to know, as she died in infancy – and indeed forbade anyone to show discontent over it.", "title": "Early life" }, { "paragraph_id": 26, "text": "Only in 1500 was the long-awaited son Federico finally born; he was the child most loved by Isabella.", "title": "Early life" }, { "paragraph_id": 27, "text": "In his capacity as captain general of the Venetian armies, Francesco was often required to go to Venice for conferences, which left Isabella in Mantua on her own at La Reggia, the ancient palace that was the family seat of the Gonzagas. She did not lack company, however, as she passed the time with her mother and with her sister, Beatrice. When Isabella met Elisabetta Gonzaga, her 18-year-old sister-in-law, the two women became close friends. They enjoyed reading books, playing cards, and travelling about the countryside together. Once they journeyed as far as Lake Garda during one of Francesco's absences, and later travelled to Venice. They maintained a steady correspondence until Elisabetta's death in 1526.", "title": "Early life" }, { "paragraph_id": 28, "text": "Isabella had met the French king in Milan in 1500 on a successful diplomatic mission that she had undertaken to protect Mantua from French invasion. Louis had been impressed by her alluring personality and keen intelligence. It was while she was being entertained by Louis, whose troops occupied Milan, that she offered asylum to Milanese refugees including Cecilia Gallerani, the refined mistress of her sister Beatrice's husband, Ludovico Sforza, Duke of Milan, who had been forced to leave his duchy in the wake of French occupation. 
Isabella presented Cecilia to King Louis, describing her as a \"lady of rare gifts and charm\".", "title": "Early life" }, { "paragraph_id": 29, "text": "A year after her 1502 marriage to Isabella's brother Alfonso, the notorious Lucrezia Borgia became the mistress of Francesco. At about the same time, Isabella had given birth to a daughter, Ippolita, and she continued to bear him children throughout Francesco and Lucrezia's long, passionate affair, which was more sexual than romantic. Lucrezia had previously made overtures of friendship to Isabella which the latter had coldly and disdainfully ignored. From the time Lucrezia had first arrived in Ferrara as Alfonso's intended bride, Isabella, despite having acted as hostess during the wedding festivities, had regarded Lucrezia as a rival, whom she sought to outdo at every opportunity. Francesco's affair with Lucrezia, whose beauty was renowned, caused Isabella much jealous suffering and emotional pain. The liaison ended when he contracted syphilis as a result of encounters with prostitutes.", "title": "Early life" }, { "paragraph_id": 30, "text": "Isabella played an important role in Mantua during troubled times for the city. When her husband was captured in 1509 and held hostage in Venice, she took control of Mantua's military forces and held off the invaders until his release in 1512. In the same year, 1512, she was the hostess at the Congress of Mantua, which was held to settle questions concerning Florence and Milan. As a ruler, she appeared to have been much more assertive and competent than her husband. When apprised of this fact upon his return, Francesco was furious and humiliated at being surpassed by his wife's superior political ability. This caused their marriage to break down irrevocably. 
As a result, Isabella began to travel freely and live independently from her husband until his death on 19 March 1519.", "title": "Early life" }, { "paragraph_id": 31, "text": "After the death of her husband, Isabella ruled Mantua as regent for her son Federico. She began to play an increasingly important role in Italian politics, steadily advancing Mantua's position. She was instrumental in promoting Mantua to a Duchy, which was obtained by wise diplomatic use of her son's marriage contracts. She also succeeded in obtaining a cardinalate for her son Ercole. She further displayed shrewd political acumen in her negotiations with Cesare Borgia, who in 1502 had dispossessed Guidobaldo da Montefeltro, Duke of Urbino, the husband of her sister-in-law and good friend Elisabetta Gonzaga.", "title": "Early life" }, { "paragraph_id": 32, "text": "As a widow, Isabella at the age of 45 became a \"devoted head of state\". Her position as Marquise required her serious attention, and she was therefore required to study the problems faced by the ruler of a city-state. To improve the well-being of her subjects she studied architecture, agriculture, and industry, and followed the principles that Niccolò Machiavelli had set forth for rulers in his book The Prince. In return, the people of Mantua respected and loved her.", "title": "Early life" }, { "paragraph_id": 33, "text": "Isabella left Mantua for Rome in 1527. She was present during the catastrophic Sack of Rome, when she converted her house, the Palazzo Colonna, into an asylum for approximately 2,000 people (including clerics, nobles and common citizens) fleeing the Imperial soldiers. Her palazzo was the only place safe from attack, because her son Ferrante Gonzaga was a general in the invading army and she herself had a good relationship with the emperor. 
When she left Rome, she managed to secure safe passage for all the refugees who had sought refuge in her home.", "title": "Early life" }, { "paragraph_id": 34, "text": "Once Rome became stabilized following the sacking, she left the city and returned to Mantua. She made it a centre of culture, started a school for girls, and turned her ducal apartments into a museum containing the finest art treasures. This was not enough to satisfy Isabella, already in her mid-sixties, so she returned to political life and ruled Solarolo, in Romagna, until her death on 13 February 1539. She was buried beside her husband in the Church of Santa Paola in Mantua, but her remains were later stolen.", "title": "Early life" }, { "paragraph_id": 35, "text": "Isabella’s appearance was frequently written about in her lifetime. Mario Equicola said that “her eyes were black and sparkling, her hair yellow, and her complexion one of dazzling brilliancy.” Similarly Gian Giorgio Trissino’s I Ritratti has a fictionalized Pietro Bembo describe Isabella’s “rippling golden hair that flowed in thick masses over her shoulders,” in a passage that, according to art historian Sally Hickson, identifies Isabella as the “living paragon of female beauty.” The real Bembo praised Isabella’s “beautiful and charming hand and pure, sweet voice” in a letter addressed to her. The alleged beauty of Isabella attracted the attention of the king of France, Charles VIII, who asked the chaplain Bernardino of Urbino about her features and attempted to arrange a meeting with her. However, this meeting never took place, as he returned to France shortly afterwards.", "title": "Appearance" }, { "paragraph_id": 36, "text": "Isabella herself frequently diminished her own appearance; commenting on his portrait, she told Francia that he had “made us far more beautiful by your art than nature ever made us.” Likewise she told Trissino that “your praises of us far exceed the truth”, and said of Titian’s portrait that “we doubt that at the age he represents us we were ever of the beauty it contains.” In 1534, the same year that Titian’s portrait was painted, Titian’s friend Pietro Aretino mocked her appearance, calling her “the monstrous Marchioness of Mantua, with ebony teeth and ivory eyelashes, dishonestly ugly and ultra-dishonestly tarted up.” Despite her displays of modesty, Isabella was also known to lose herself in front of a mirror.", "title": "Appearance" }, { "paragraph_id": 37, "text": "Isabella was worried about her weight from an early age. As an adult she frequently discussed her weight with those close to her. In 1499 she sent a portrait by Giovanni Santi to her brother-in-law Ludovico Sforza, complaining that it did not resemble her very much “for being a little fatter than me.” Ludovico replied that he liked the portrait of her very much and that it was very similar to her, although \"somewhat more fat\", unless Isabella had \"grown fatter after we saw her.\" In 1509 she complained to her husband that “if she had more to do with running the state she would not have grown fat”, while in 1511 her sister Lucrezia complained about an early draft of the Francia portrait that made her look too thin.", "title": "Appearance" }, { "paragraph_id": 38, "text": "Her face became damaged and prematurely aged by Venetian ceruse.", "title": "Appearance" }, { "paragraph_id": 39, "text": "During her lifetime and after her death, poets, popes, and statesmen paid tribute to Isabella. 
Pope Leo X invited her to treat him with \"as much friendliness as you would your brother\". His secretary Pietro Bembo described her as \"one of the wisest and most fortunate of women\". The poet Ariosto deemed her the \"liberal and magnanimous Isabella\". Author Matteo Bandello wrote that she was \"supreme among women\", and the diplomat Niccolò da Correggio entitled her \"The First Lady of the world\".", "title": "Legacy" }, { "paragraph_id": 40, "text": "Far harsher judgments were expressed by Pope Julius II, a man of corrupt morals, who, in disagreement with Isabella's conduct, even went so far as to call her \"that ribald whore\". A not dissimilar judgment was expressed by her husband Francesco himself who, then a prisoner of the Venetians, accused his wife of not loving him and of having indeed been the cause of his ruin, referring to her by letter as \"that whore of my wife\".", "title": "Legacy" }, { "paragraph_id": 41, "text": "Isabella d'Este is famous as the most important art patron of the Renaissance; her life is documented by her correspondence, which remains archived in Mantua (approximately 28,000 letters received and copies of approximately 12,000 letters written).", "title": "Cultural pursuits" }, { "paragraph_id": 42, "text": "In painting she had numerous famous artists of the time work for her, including Giovanni Bellini, Giorgione, Leonardo da Vinci, Andrea Mantegna (court painter until 1506), Perugino, Raphael, Titian, Antonio da Correggio, Lorenzo Costa (court painter from 1509), Dosso Dossi, Francesco Francia, Giulio Romano, and many others. For instance her 'Studiolo' in the Ducal Palace, Mantua, was decorated with allegories by Mantegna, Perugino, Costa, and Correggio. 
It has been suggested that she requested from da Vinci a specific painting for her studio, known as La Scapigliata, and may have given it to her son Federico II on the occasion of his wedding to Margaret Paleologa.", "title": "Cultural pursuits" }, { "paragraph_id": 43, "text": "In parallel she contracted the most important sculptors and medallists of her time, namely Michelangelo, Pier Jacopo Alari Bonacolsi (L'Antico), Gian Cristoforo Romano, and Tullio Lombardo. She also collected ancient Roman art.", "title": "Cultural pursuits" }, { "paragraph_id": 44, "text": "As for writers, she was in contact with Pietro Aretino, Ludovico Ariosto, Pietro Bembo, Baldassare Castiglione, Mario Equicola, Gian Giorgio Trissino, and others.", "title": "Cultural pursuits" }, { "paragraph_id": 45, "text": "In music Isabella sponsored the composers Bartolomeo Tromboncino and Marco Cara and she played the lute. Unusually, she employed women as professional singers at her court, including Giovanna Moreschi, the wife of Marchetto Cara.", "title": "Cultural pursuits" }, { "paragraph_id": 46, "text": "In architecture she could not afford new palaces; however, she commissioned architects such as Biagio Rossetti and Battista Covo.", "title": "Cultural pursuits" }, { "paragraph_id": 47, "text": "She was also considered an icon of her time in fashion. Her Balzo headwear is famous – documented as her invention in letters circa 1509 and visible several times in portraits of other ladies in the 1520s/30s.", "title": "Cultural pursuits" }, { "paragraph_id": 48, "text": "Despite her significant art patronage that included a number of portraits, there are very few surviving portraits that may be identified as Isabella, especially when compared to her sister Beatrice. It is known that the elderly Isabella preferred idealized paintings and even declined to sit as a model. 
However, it may be presumed that she insisted nonetheless on seeing her personal characteristics in the outcome. These few identifications are inhomogeneous (i.e. differing eye and hair colours as well as divergent eyebrows in two Titian portraits) and there are no known images of her between the ages of 26 and 54.", "title": "Portraits" }, { "paragraph_id": 49, "text": "In 1495 she categorically refused to pose for Mantegna in the Madonna della Vittoria – where her figure was to appear next to that of her husband – since in the past the painter had portrayed her \"so badly done\" – in a painting that in fact has not survived – \"which has none of my similarities\". However, the negative judgment of the Marquise was not due to Mantegna's inability to portray her true to life, as she herself writes, but to the opposite failing: not knowing how to \"well counterfeit the natural\", that is, to idealize. Her husband Francesco had to pose alone and Mantegna remedied the disturbance of the symmetry by painting, in place of the Marquise, St. 
Elizabeth, her eponymous saint.", "title": "Portraits" }, { "paragraph_id": 50, "text": "In recent years several museums have withdrawn their few identifications of portraits as Isabella because of concern about possible misidentification.", "title": "Portraits" }, { "paragraph_id": 51, "text": "The remaining three colourful portraits are still inhomogeneous (Kunsthistorisches Museum/KHM, Vienna):", "title": "Portraits" }, { "paragraph_id": 52, "text": "La Bella (now in Palazzo Pitti, Florence) has been discussed as an alternative to Titian's 1536 portrait in Vienna, because the commission from the 60-year-old patron was for a rejuvenated portrait; if La Bella were Isabella, eye colour, hair colour, eyebrows, and general appearance would be consistent across all known portraits, allowing potential links to further identifications.", "title": "Portraits" }, { "paragraph_id": 53, "text": "As of 2021, the 1495 medal by Gian Cristoforo Romano (several extant copies) is the only reliable identification because of the inscription created during Isabella's lifetime.", "title": "Portraits" }, { "paragraph_id": 54, "text": "Idealised portraits still show characteristics of the person. The following characteristics can be derived (characteristics of the disputed Isabella in Black are excluded):", "title": "Portraits" }, { "paragraph_id": 55, "text": "In the current catalogue raisonné of Leonardo da Vinci (2019), only Isabella d'Este is documented as a plausible alternative subject of Leonardo's Mona Lisa, usually considered a portrait of Lisa del Giocondo. Lisa was the wife of a merchant in Florence, and Giorgio Vasari wrote of her portrait by Leonardo, though debate persists about whether this is the portrait now known as the Mona Lisa. Evidence in favor of Isabella as the subject of the famous work includes Leonardo's drawing 'Isabella d'Este' from 1499 and her letters of 1501–1506 requesting the promised painted portrait. 
Further arguments focus upon the mountains in the background indicating the native origin of the subject, and the armrest in the painting as a Renaissance symbol used to identify a portrait as that of a sovereign. The Louvre's reservation is that Isabella would be a \"blonde\", a feature that exists only in the widely circulated but uncertain representation Isabella in Black.", "title": "Portraits" }, { "paragraph_id": 56, "text": "Together Isabella and Francesco had eight children:", "title": "Children" }, { "paragraph_id": 57, "text": "Correspondence exchanged by Isabella documents the Renaissance European tendency to perceive black African slaves as exotic. Isabella's pursuit of a black child as a servant is extensively documented. On 1 May 1491 Isabella asked Giorgio Brognolo, her agent in Venice, to procure a young black girl ('una moreta') between the ages of one-and-a-half and four, and twice in early June reminded him of the request, emphasizing that the girl should be 'as black as possible'. Isabella's household and financial records reflect that she already had a significantly older black girl in her service when she inquired after a younger black child. Records also reflect that she obtained a little black girl from a Venetian orphanage. She opened negotiations with a Venetian patrician household for the sale of a little black boy and purchased an enslaved little black girl from her sister.", "title": "Household slaves" }, { "paragraph_id": 58, "text": "The artwork The Dinner Party by Judy Chicago features a place setting for Isabella d'Este.", "title": "Depiction in modern media" }, { "paragraph_id": 59, "text": "Isabella d'Este was portrayed by Belgian actress Alexandra Oppo in the television show Borgia (2011–2014).", "title": "Depiction in modern media" } ]
Isabella d'Este was Marchioness of Mantua and one of the leading women of the Italian Renaissance, a major cultural and political figure. She was a patron of the arts as well as a leader of fashion, whose innovative style of dressing was copied by numerous women. The poet Ariosto labeled her as the "liberal and magnanimous Isabella", while author Matteo Bandello described her as having been "supreme among women". Diplomat Niccolò da Correggio went even further by hailing her as "The First Lady of the world". She served as the regent of Mantua during the absence of her husband Francesco II Gonzaga and during the minority of her son Federico. She was a prolific letter-writer and maintained a lifelong correspondence with her sister-in-law Elisabetta Gonzaga. Isabella grew up in a cultured family in the city-state of Ferrara. She received a fine classical education and as a girl met many famous humanist scholars and artists. Due to the vast amount of extant correspondence between Isabella and her family and friends, her life is extremely well documented.
2001-12-16T21:44:23Z
2023-12-11T12:29:55Z
[ "Template:Webarchive", "Template:S-aft", "Template:S-end", "Template:Short description", "Template:Sfn", "Template:Citation needed", "Template:Cite web", "Template:Princesses of Modena", "Template:For", "Template:Cite encyclopedia", "Template:Self-published source", "Template:S-bef", "Template:S-ttl", "Template:Blockquote", "Template:Reflist", "Template:Cite journal", "Template:Commons", "Template:Cite book", "Template:Cite Grove", "Template:Citation", "Template:S-start", "Template:Use dmy dates", "Template:Infobox royalty", "Template:Main", "Template:Better source needed" ]
https://en.wikipedia.org/wiki/Isabella_d%27Este
15,402
International standard
An international standard is a technical standard developed by one or more international standards organizations. International standards are available for consideration and use worldwide. The most prominent such organization is the International Organization for Standardization (ISO). Other prominent international standards organizations include the International Telecommunication Union (ITU) and the International Electrotechnical Commission (IEC). Together, these three organizations have formed the World Standards Cooperation alliance. International standards may be used either by direct application or by a process of modifying an international standard to suit local conditions. Adopting international standards results in creating national standards that are equivalent to, or substantially the same as, international standards in technical content, but may have (i) editorial differences as to appearance, use of symbols and measurement units, substitution of a point for a comma as the decimal marker, and (ii) differences resulting from conflicts in government regulations or industry-specific requirements caused by fundamental climatic, geographic, technologic, or infrastructure factors, or the stringency of safety requirements that a given standard authority considers appropriate. International standards are one way to overcome technical barriers in international commerce caused by differences among technical regulations and standards developed independently and separately by each nation, national standards organization, or business. Technical barriers arise when different groups come together, each with a large user base, doing some well-established thing that between them is mutually incompatible. Establishing international standards is one way of preventing or overcoming this problem. 
To support this, the World Trade Organization (WTO) Technical Barriers to Trade (TBT) Committee published the "Six Principles" guiding members in the development of international standards. The implementation of standards in industry and commerce became highly important with the onset of the Industrial Revolution and the need for high-precision machine tools and interchangeable parts. Henry Maudslay developed the first industrially practical screw-cutting lathe in 1800, which allowed for the standardisation of screw thread sizes for the first time. Maudslay's work, as well as the contributions of other engineers, accomplished a modest amount of industry standardization; some companies' in-house standards spread a bit within their industries. Joseph Whitworth's screw thread measurements were adopted as the first (unofficial) national standard by companies around the country in 1841. It came to be known as the British Standard Whitworth, and was widely adopted in other countries. By the end of the 19th century, differences in standards between companies were making trade increasingly difficult and strained. The Engineering Standards Committee was established in London in 1901 as the world's first national standards body. After the First World War, similar national bodies were established in other countries. The Deutsches Institut für Normung was set up in Germany in 1917, followed by its counterparts, the American National Standards Institute and the French Commission Permanente de Standardisation, both in 1918. There are not many books that cover standards in general, but a book written in 2019 by Nicholas Rich and Tegwen Malik gives a very comprehensive overview of the history of standards, how ISO standards are drafted, and key ISO standards such as ISO 9001 and ISO 14001. A paper has been published explaining the differences between international standards and private standards. 
One of the most well-established international standardization organizations is the International Telecommunication Union (ITU), a specialized agency of the United Nations which was founded on 17 May 1865 as the International Telegraph Union. The ITU was initially focused on the standardization of telegraph signals, and later evolved to include telephony, radio and satellite communications, and other information and communication technology. By the mid to late 19th century, efforts were being made to standardize electrical measurement. An important figure was R. E. B. Crompton, who became concerned by the large range of different standards and systems used by electrical engineering companies and scientists in the early 20th century. Many companies had entered the market in the 1890s and all chose their own settings for voltage, frequency, current and even the symbols used on circuit diagrams. Adjacent buildings would have totally incompatible electrical systems simply because they had been fitted out by different companies. Crompton could see the lack of efficiency in this system and began to consider proposals for an international standard for electrical engineering. In 1904, Crompton represented Britain at the Louisiana Purchase Exposition in St. Louis as part of a delegation by the Institution of Electrical Engineers. He presented a paper on standardisation, which was so well received that he was asked to look into the formation of a commission to oversee the process. By 1906, his work was complete and he drew up a permanent constitution for the first international standards organization, the International Electrotechnical Commission (IEC). The body held its first meeting that year in London, with representatives from 14 countries. In honour of his contribution to electrical standardisation, Lord Kelvin was elected as the body's first President. 
The International Federation of the National Standardizing Associations (ISA) was founded in 1926 with a broader remit to enhance international cooperation for all technical standards and specifications. The body was suspended in 1942 during World War II. After the war, ISA was approached by the recently formed United Nations Standards Coordinating Committee (UNSCC) with a proposal to form a new global standards body. In October 1946, ISA and UNSCC delegates from 25 countries met in London and agreed to join forces to create the International Organization for Standardization (ISO); the organization officially began operations in February 1947. Global standards are also referred to as industry or private standards, which are designed and developed with the entire world in mind. Unlike international standards, these standards are not developed in international organizations or standards-setting organizations (SSOs), which follow a consensus process. Instead, these standards are developed by private sector entities, like NGOs and for-profit organizations, often without transparency, openness, or consensus considerations.
[ { "paragraph_id": 0, "text": "An international standard is a technical standard developed by one or more international standards organizations. International standards are available for consideration and use worldwide. The most prominent such organization is the International Organization for Standardization (ISO). Other prominent international standards organizations include the International Telecommunication Union (ITU) and the International Electrotechnical Commission (IEC). Together, these three organizations have formed the World Standards Cooperation alliance.", "title": "" }, { "paragraph_id": 1, "text": "International standards may be used either by direct application or by a process of modifying an international standard to suit local conditions. Adopting international standards results in creating national standards that are equivalent to, or substantially the same as, international standards in technical content, but may have (i) editorial differences as to appearance, use of symbols and measurement units, substitution of a point for a comma as the decimal marker, and (ii) differences resulting from conflicts in government regulations or industry-specific requirements caused by fundamental climatic, geographic, technologic, or infrastructure factors, or the stringency of safety requirements that a given standard authority considers appropriate.", "title": "Purpose" }, { "paragraph_id": 2, "text": "International standards are one way to overcome technical barriers in international commerce caused by differences among technical regulations and standards developed independently and separately by each nation, national standards organization, or business. Technical barriers arise when different groups come together, each with a large user base, doing some well-established thing that between them is mutually incompatible. Establishing international standards is one way of preventing or overcoming this problem. 
To support this, the World Trade Organization (WTO) Technical Barriers to Trade (TBT) Committee published the \"Six Principles\" guiding members in the development of international standards.", "title": "Purpose" }, { "paragraph_id": 3, "text": "The implementation of standards in industry and commerce became highly important with the onset of the Industrial Revolution and the need for high-precision machine tools and interchangeable parts. Henry Maudslay developed the first industrially practical screw-cutting lathe in 1800, which allowed for the standardisation of screw thread sizes for the first time.", "title": "History" }, { "paragraph_id": 4, "text": "Maudslay's work, as well as the contributions of other engineers, accomplished a modest amount of industry standardization; some companies' in-house standards spread a bit within their industries. Joseph Whitworth's screw thread measurements were adopted as the first (unofficial) national standard by companies around the country in 1841. It came to be known as the British Standard Whitworth, and was widely adopted in other countries.", "title": "History" }, { "paragraph_id": 5, "text": "By the end of the 19th century differences in standards between companies were making trade increasingly difficult and strained. The Engineering Standards Committee was established in London in 1901 as the world's first national standards body. After the First World War, similar national bodies were established in other countries. 
The Deutsches Institut für Normung was set up in Germany in 1917, followed by its counterparts, the American National Standards Institute and the French Commission Permanente de Standardisation, both in 1918.", "title": "History" }, { "paragraph_id": 6, "text": "There are not many books that cover standards in general, but a book written in 2019 by Nicholas Rich and Tegwen Malik gives a very comprehensive overview of the history of standards, how ISO standards are drafted, and key ISO standards such as ISO 9001 and ISO 14001. A paper has been published explaining the differences between international standards and private standards.", "title": "History" }, { "paragraph_id": 7, "text": "One of the most well-established international standardization organizations is the International Telecommunication Union (ITU), a specialized agency of the United Nations which was founded on 17 May 1865 as the International Telegraph Union. The ITU was initially focused on the standardization of telegraph signals, and later evolved to include telephony, radio and satellite communications, and other information and communication technology.", "title": "History" }, { "paragraph_id": 8, "text": "By the mid to late 19th century, efforts were being made to standardize electrical measurement. An important figure was R. E. B. Crompton, who became concerned by the large range of different standards and systems used by electrical engineering companies and scientists in the early 20th century. Many companies had entered the market in the 1890s and all chose their own settings for voltage, frequency, current and even the symbols used on circuit diagrams. Adjacent buildings would have totally incompatible electrical systems simply because they had been fitted out by different companies. 
Crompton could see the lack of efficiency in this system and began to consider proposals for an international standard for electrical engineering.", "title": "History" }, { "paragraph_id": 9, "text": "In 1904, Crompton represented Britain at the Louisiana Purchase Exposition in St. Louis as part of a delegation by the Institution of Electrical Engineers. He presented a paper on standardisation, which was so well received that he was asked to look into the formation of a commission to oversee the process. By 1906, his work was complete and he drew up a permanent constitution for the first international standards organization, the International Electrotechnical Commission (IEC). The body held its first meeting that year in London, with representatives from 14 countries. In honour of his contribution to electrical standardisation, Lord Kelvin was elected as the body's first President.", "title": "History" }, { "paragraph_id": 10, "text": "The International Federation of the National Standardizing Associations (ISA) was founded in 1926 with a broader remit to enhance international cooperation for all technical standards and specifications. The body was suspended in 1942 during World War II.", "title": "History" }, { "paragraph_id": 11, "text": "After the war, ISA was approached by the recently formed United Nations Standards Coordinating Committee (UNSCC) with a proposal to form a new global standards body. In October 1946, ISA and UNSCC delegates from 25 countries met in London and agreed to join forces to create the International Organization for Standardization (ISO); the organization officially began operations in February 1947.", "title": "History" }, { "paragraph_id": 12, "text": "Global standards are also referred to as industry or private standards, which are designed and developed with the entire world in mind. 
Unlike international standards, these standards are not developed in international organizations or standards-setting organizations (SSOs), which follow a consensus process. Instead, these standards are developed by private sector entities, like NGOs and for-profit organizations, often without transparency, openness, or consensus considerations.", "title": "Global standards" } ]
An international standard is a technical standard developed by one or more international standards organizations. International standards are available for consideration and use worldwide. The most prominent such organization is the International Organization for Standardization (ISO). Other prominent international standards organizations include the International Telecommunication Union (ITU) and the International Electrotechnical Commission (IEC). Together, these three organizations have formed the World Standards Cooperation alliance.
2001-12-17T00:23:28Z
2023-12-29T19:18:17Z
[ "Template:Cite web", "Template:Citation", "Template:Webarchive", "Template:Commons category-inline", "Template:Authority control", "Template:Short description", "Template:Nowrap", "Template:Reflist", "Template:Cite book" ]
https://en.wikipedia.org/wiki/International_standard
15,403
ISO 4217
ISO 4217 is a standard published by the International Organization for Standardization (ISO) that defines alpha codes and numeric codes for the representation of currencies and provides information about the relationships between individual currencies and their minor units. This data is published in three tables: The first edition of ISO 4217 was published in 1978. The tables, history and ongoing discussion are maintained by SIX Group on behalf of ISO and the Swiss Association for Standardization. The ISO 4217 code list is used in banking and business globally. In many countries, the ISO 4217 alpha codes for the more common currencies are so well known publicly that exchange rates published in newspapers or posted in banks use only these to delineate the currencies, instead of translated currency names or ambiguous currency symbols. ISO 4217 alpha codes are used on airline tickets and international train tickets to remove any ambiguity about the price. In 1973, the ISO Technical Committee 68 decided to develop codes for the representation of currencies and funds for use in any application of trade, commerce or banking. At the 17th session (February 1978), the related UN/ECE Group of Experts agreed that the three-letter alphabetic codes for International Standard ISO 4217, "Codes for the representation of currencies and funds", would be suitable for use in international trade. Over time, new currencies are created and old currencies are discontinued. Such changes usually originate from the formation of new countries, treaties between countries on shared currencies or monetary unions, or redenomination from an existing currency due to excessive inflation. As a result, the list of codes must be updated from time to time. The ISO 4217 maintenance agency is responsible for maintaining the list of codes. 
In the case of national currencies, the first two letters of the alpha code are the two letters of the ISO 3166-1 alpha-2 country code and the third is usually the initial of the currency's main unit. So Japan's currency code is JPY: "JP" for Japan and "Y" for yen. This eliminates the problem caused by the names dollar, franc, peso and pound being used in dozens of countries, each having significantly differing values. While in most cases the ISO code resembles an abbreviation of the currency's full English name, this is not always the case, as currencies such as the Algerian dinar, Aruban florin, Cayman dollar, renminbi, sterling and the Swiss franc have been assigned codes which do not closely resemble abbreviations of the official currency names. In some cases, the third letter of the alpha code is not the initial letter of a currency unit name. There may be a number of reasons for this: In addition to codes for most active national currencies ISO 4217 provides codes for "supranational" currencies, procedural purposes, and several things which are "similar to" currencies: The use of an initial letter "X" for these purposes is facilitated by the ISO 3166 rule that no official country code beginning with X will ever be assigned. The inclusion of EU (denoting the European Union) in the ISO 3166-1 reserved codes list allows the euro to be coded as EUR rather than assigned a code beginning with X, even though it is a supranational currency. ISO 4217 also assigns a three-digit numeric code to each currency. This numeric code is usually the same as the numeric code assigned to the corresponding country by ISO 3166-1. For example, USD (United States dollar) has numeric code 840 which is also the ISO 3166-1 code for "US" (United States). The following is a list of active codes of official ISO 4217 currency names as of 1 April 2022. In the standard the values are called "alphabetic code", "numeric code", "minor unit", and "entity". 
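The alpha-code conventions above (ISO 3166-1 alpha-2 country code plus the currency unit's initial, with the "X" prefix reserved for supranational and special-purpose codes) can be sketched as follows. The small lookup table is hand-picked sample data for illustration, not the official ISO table:

```python
# Illustrative sketch of the ISO 4217 alpha-code conventions described above.
# The lookup table is hand-picked sample data, not the official ISO list.

COUNTRY_ALPHA2 = {"Japan": "JP", "United States": "US", "Switzerland": "CH"}

def national_code(country: str, unit_initial: str) -> str:
    """Typical national currency code: ISO 3166-1 alpha-2 code + unit initial."""
    return COUNTRY_ALPHA2[country] + unit_initial

def is_special_code(code: str) -> bool:
    """Codes beginning with 'X' (e.g. XAU, XDR) are supranational or
    special-purpose; ISO 3166 never assigns country codes starting with X."""
    return code.startswith("X")

print(national_code("Japan", "Y"))   # JPY
print(is_special_code("XAU"))        # True
print(is_special_code("EUR"))        # False (EUR uses the reserved ISO 3166 code EU)
```

Note that this pattern is only the usual case: as the text explains, codes such as CHF (Swiss franc) or GBP (sterling) do not follow a simple English-abbreviation rule.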
According to UN/CEFACT recommendation 9, paragraphs 8–9 ECE/TRADE/203, 1996: A number of currencies had official ISO 4217 currency codes and currency names until their replacement by another currency. The table below shows the ISO currency codes of former currencies and their common names (which do not always match the ISO 4217 names). That table was introduced by ISO at the end of 1988. The 2008 (7th) edition of ISO 4217 says the following about minor units of currency: Requirements sometimes arise for values to be expressed in terms of minor units of currency. When this occurs, it is necessary to know the decimal relationship that exists between the currency concerned and its minor unit. This information has therefore been included in this International Standard and is shown in the column headed "Minor unit" in Tables A.1 and A.2; "0" means that there is no minor unit for that currency, whereas "1", "2" and "3" signify a ratio of 10:1, 100:1 and 1000:1 respectively. The names of the minor units are not given. Examples of the ratios of 100:1 and 1000:1 include the United States dollar and the Bahraini dinar, for which the column headed "Minor unit" shows "2" and "3", respectively. As of 2021, two currencies have non-decimal ratios, the Mauritanian ouguiya and the Malagasy ariary; in both cases the ratio is 5:1. For these, the "Minor unit" column shows the number "2". Some currencies, such as the Burundian franc, do not in practice have any minor currency unit at all. These show the number "0", as with currencies whose minor units are unused due to negligible value. The ISO standard does not regulate the spacing, prefixing, or suffixing used with currency codes. 
The style guide of the European Union's Publications Office declares that, for texts issued by or through the Commission in English, Irish, Latvian and Maltese, the ISO 4217 code is to be followed by a "hard space" (non-breaking space) and the amount: and for texts in Bulgarian, Croatian, Czech, Danish, Dutch, Estonian, Finnish, French, German, Greek, Hungarian, Italian, Lithuanian, Polish, Portuguese, Romanian, Slovak, Slovene, Spanish and Swedish the order is reversed; the amount is followed by a non-breaking space and the ISO 4217 code: As illustrated, the order is determined not by the currency but by the native language of the document context. The US dollar has two codes assigned: USD and USN ("US dollar next day"). The USS (same day) code is no longer in use, and was removed from the list of active ISO 4217 codes in March 2014. A number of active currencies do not have an ISO 4217 code, because they may be: See Category:Fixed exchange rate for a list of all currently pegged currencies. Despite having no presence or status in the standard, three-letter acronyms that resemble ISO 4217 codes are sometimes used locally or commercially to represent de facto currencies or currency instruments. The following non-ISO codes were used in the past. Minor units of currency (also known as currency subdivisions or currency subunits) are often used for pricing and trading stocks and other assets, such as energy, but are not assigned codes by ISO 4217. Two conventions for representing minor units are in widespread use: A third convention is similar to the second one but uses an upper-case letter, e.g. ZAC for the South African Cent. Cryptocurrencies have not been assigned an ISO 4217 code. However, some cryptocurrencies and cryptocurrency exchanges use a three-letter acronym that resembles an ISO 4217 code.
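The "minor unit" exponents described earlier (0, 2, 3, and so on) define the decimal ratio between a currency's main unit and its minor unit, which is what applications use when converting prices to and from minor units. A minimal sketch, using a hand-picked sample table rather than the official ISO data (the non-decimal 5:1 cases, the Mauritanian ouguiya and Malagasy ariary, cannot be expressed as a power-of-ten exponent and are omitted):

```python
# Minimal sketch: converting between major and minor currency units using
# the ISO 4217 "minor unit" exponent. The table below is a hand-picked
# sample for illustration, not the official ISO data.

MINOR_UNIT_EXPONENT = {
    "USD": 2,  # 100 cents per dollar
    "BHD": 3,  # 1000 fils per dinar
    "JPY": 0,  # no minor unit in practical use
}

def to_minor_units(amount: float, code: str) -> int:
    """Express an amount in the currency's minor units (e.g. dollars -> cents)."""
    return round(amount * 10 ** MINOR_UNIT_EXPONENT[code])

def to_major_units(minor: int, code: str) -> float:
    """Express a count of minor units back in the main unit."""
    return minor / 10 ** MINOR_UNIT_EXPONENT[code]

print(to_minor_units(12.34, "USD"))  # 1234
print(to_minor_units(1.5, "BHD"))    # 1500
print(to_major_units(500, "JPY"))    # 500.0
```

In production code a decimal type would normally replace the float arithmetic shown here, precisely because binary floats cannot represent most decimal currency amounts exactly.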
[ { "paragraph_id": 0, "text": "ISO 4217 is a standard published by the International Organization for Standardization (ISO) that defines alpha codes and numeric codes for the representation of currencies and provides information about the relationships between individual currencies and their minor units. This data is published in three tables:", "title": "" }, { "paragraph_id": 1, "text": "The first edition of ISO 4217 was published in 1978. The tables, history and ongoing discussion are maintained by SIX Group on behalf of ISO and the Swiss Association for Standardization.", "title": "" }, { "paragraph_id": 2, "text": "The ISO 4217 code list is used in banking and business globally. In many countries, the ISO 4217 alpha codes for the more common currencies are so well known publicly that exchange rates published in newspapers or posted in banks use only these to delineate the currencies, instead of translated currency names or ambiguous currency symbols. ISO 4217 alpha codes are used on airline tickets and international train tickets to remove any ambiguity about the price.", "title": "" }, { "paragraph_id": 3, "text": "In 1973, the ISO Technical Committee 68 decided to develop codes for the representation of currencies and funds for use in any application of trade, commerce or banking. At the 17th session (February 1978), the related UN/ECE Group of Experts agreed that the three-letter alphabetic codes for International Standard ISO 4217, \"Codes for the representation of currencies and funds\", would be suitable for use in international trade.", "title": "History" }, { "paragraph_id": 4, "text": "Over time, new currencies are created and old currencies are discontinued. Such changes usually originate from the formation of new countries, treaties between countries on shared currencies or monetary unions, or redenomination from an existing currency due to excessive inflation. As a result, the list of codes must be updated from time to time. 
The ISO 4217 maintenance agency is responsible for maintaining the list of codes.", "title": "History" }, { "paragraph_id": 5, "text": "In the case of national currencies, the first two letters of the alpha code are the two letters of the ISO 3166-1 alpha-2 country code and the third is usually the initial of the currency's main unit. So Japan's currency code is JPY: \"JP\" for Japan and \"Y\" for yen. This eliminates the problem caused by the names dollar, franc, peso and pound being used in dozens of countries, each having significantly differing values. While in most cases the ISO code resembles an abbreviation of the currency's full English name, this is not always the case, as currencies such as the Algerian dinar, Aruban florin, Cayman dollar, renminbi, sterling and the Swiss franc have been assigned codes which do not closely resemble abbreviations of the official currency names.", "title": "Types of codes" }, { "paragraph_id": 6, "text": "In some cases, the third letter of the alpha code is not the initial letter of a currency unit name. There may be a number of reasons for this:", "title": "Types of codes" }, { "paragraph_id": 7, "text": "In addition to codes for most active national currencies ISO 4217 provides codes for \"supranational\" currencies, procedural purposes, and several things which are \"similar to\" currencies:", "title": "Types of codes" }, { "paragraph_id": 8, "text": "The use of an initial letter \"X\" for these purposes is facilitated by the ISO 3166 rule that no official country code beginning with X will ever be assigned.", "title": "Types of codes" }, { "paragraph_id": 9, "text": "The inclusion of EU (denoting the European Union) in the ISO 3166-1 reserved codes list allows the euro to be coded as EUR rather than assigned a code beginning with X, even though it is a supranational currency.", "title": "Types of codes" }, { "paragraph_id": 10, "text": "ISO 4217 also assigns a three-digit numeric code to each currency. 
This numeric code is usually the same as the numeric code assigned to the corresponding country by ISO 3166-1. For example, USD (United States dollar) has numeric code 840 which is also the ISO 3166-1 code for \"US\" (United States).", "title": "Types of codes" }, { "paragraph_id": 11, "text": "The following is a list of active codes of official ISO 4217 currency names as of 1 April 2022. In the standard the values are called \"alphabetic code\", \"numeric code\", \"minor unit\", and \"entity\".", "title": "List of ISO 4217 currency codes" }, { "paragraph_id": 12, "text": "According to UN/CEFACT recommendation 9, paragraphs 8–9 ECE/TRADE/203, 1996:", "title": "List of ISO 4217 currency codes" }, { "paragraph_id": 13, "text": "A number of currencies had official ISO 4217 currency codes and currency names until their replacement by another currency. The table below shows the ISO currency codes of former currencies and their common names (which do not always match the ISO 4217 names). That table was introduced by ISO at the end of 1988.", "title": "List of ISO 4217 currency codes" }, { "paragraph_id": 14, "text": "The 2008 (7th) edition of ISO 4217 says the following about minor units of currency:", "title": "Currency details" }, { "paragraph_id": 15, "text": "Requirements sometimes arise for values to be expressed in terms of minor units of currency. When this occurs, it is necessary to know the decimal relationship that exists between the currency concerned and its minor unit. This information has therefore been included in this International Standard and is shown in the column headed \"Minor unit\" in Tables A.1 and A.2; \"0\" means that there is no minor unit for that currency, whereas \"1\", \"2\" and \"3\" signify a ratio of 10:1, 100:1 and 1000:1 respectively. 
The names of the minor units are not given.", "title": "Currency details" }, { "paragraph_id": 16, "text": "Examples for the ratios of 100:1 and 1000:1 include the United States dollar and the Bahraini dinar, for which the column headed \"Minor unit\" shows \"2\" and \"3\", respectively. As of 2021, two currencies have non-decimal ratios, the Mauritanian ouguiya and the Malagasy ariary; in both cases the ratio is 5:1. For these, the \"Minor unit\" column shows the number \"2\". Some currencies, such as the Burundian franc, do not in practice have any minor currency unit at all. These show the number \"0\", as with currencies whose minor units are unused due to negligible value.", "title": "Currency details" }, { "paragraph_id": 17, "text": "The ISO standard does not regulate either the spacing, prefixing or suffixing in usage of currency codes. The style guide of the European Union's Publication Office declares that, for texts issued by or through the Commission in English, Irish, Latvian and Maltese, the ISO 4217 code is to be followed by a \"hard space\" (non-breaking space) and the amount:", "title": "Currency details" }, { "paragraph_id": 18, "text": "and for texts in Bulgarian, Croatian, Czech, Danish, Dutch, Estonian, Finnish, French, German, Greek, Hungarian, Italian, Lithuanian, Polish, Portuguese, Romanian, Slovak, Slovene, Spanish and Swedish the order is reversed; the amount is followed by a non-breaking space and the ISO 4217 code:", "title": "Currency details" }, { "paragraph_id": 19, "text": "As illustrated, the order is determined not by the currency but by the native language of the document context.", "title": "Currency details" }, { "paragraph_id": 20, "text": "The US dollar has two codes assigned: USD and USN (\"US dollar next day\"). 
The USS (same day) code is not in use any longer, and was removed from the list of active ISO 4217 codes in March 2014.", "title": "Currency details" }, { "paragraph_id": 21, "text": "A number of active currencies do not have an ISO 4217 code, because they may be:", "title": "Non ISO 4217 currencies" }, { "paragraph_id": 22, "text": "See Category:Fixed exchange rate for a list of all currently pegged currencies.", "title": "Non ISO 4217 currencies" }, { "paragraph_id": 23, "text": "Despite having no presence or status in the standard, three-letter acronyms that resemble ISO 4217 coding are sometimes used locally or commercially to represent de facto currencies or currency instruments.", "title": "Non ISO 4217 currencies" }, { "paragraph_id": 24, "text": "The following non-ISO codes were used in the past.", "title": "Non ISO 4217 currencies" }, { "paragraph_id": 25, "text": "Minor units of currency (also known as currency subdivisions or currency subunits) are often used for pricing and trading stocks and other assets, such as energy, but are not assigned codes by ISO 4217. Two conventions for representing minor units are in widespread use:", "title": "Non ISO 4217 currencies" }, { "paragraph_id": 26, "text": "A third convention is similar to the second one but uses an upper-case letter, e.g. ZAC for the South African Cent.", "title": "Non ISO 4217 currencies" }, { "paragraph_id": 27, "text": "Cryptocurrencies have not been assigned an ISO 4217 code. However, some cryptocurrencies and cryptocurrency exchanges use a three-letter acronym that resembles an ISO 4217 code.", "title": "Non ISO 4217 currencies" } ]
ISO 4217 is a standard published by the International Organization for Standardization (ISO) that defines alpha codes and numeric codes for the representation of currencies and provides information about the relationships between individual currencies and their minor units. This data is published in three tables: Table A.1 – Current currency & funds code list Table A.2 – Current funds codes Table A.3 – List of codes for historic denominations of currencies & funds The first edition of ISO 4217 was published in 1978. The tables, history and ongoing discussion are maintained by SIX Group on behalf of ISO and the Swiss Association for Standardization. The ISO 4217 code list is used in banking and business globally. In many countries, the ISO 4217 alpha codes for the more common currencies are so well known publicly that exchange rates published in newspapers or posted in banks use only these to delineate the currencies, instead of translated currency names or ambiguous currency symbols. ISO 4217 alpha codes are used on airline tickets and international train tickets to remove any ambiguity about the price.
2001-12-17T01:39:33Z
2023-12-24T00:41:01Z
[ "Template:Nowrap", "Template:Flag", "Template:Blockquote", "Template:N/a", "Template:Anchor", "Template:Use dmy dates", "Template:Numismatics", "Template:Portal", "Template:ISO 4217/cite", "Template:Cite book", "Template:ISO standards", "Template:Notelist", "Template:Cite web", "Template:ISO 4217/cite/SIX Group", "Template:Big", "Template:As of", "Template:Citation needed", "Template:See also", "Template:Annotated link", "Template:Cn", "Template:Geocoding-systems", "Template:Short description", "Template:Redirect-distinguish", "Template:Efn", "Template:Reflist", "Template:Mono", "Template:Val", "Template:Lang" ]
https://en.wikipedia.org/wiki/ISO_4217
15,406
Irgun
The Irgun (Hebrew: ארגון; full title: Hebrew: הארגון הצבאי הלאומי בארץ ישראל Hā-ʾIrgun Ha-Tzvaʾī Ha-Leūmī b-Ērētz Yiśrāʾel, lit. "The National Military Organization in the Land of Israel"), or Etzel (Hebrew: אצ"ל), was a Zionist paramilitary organization that operated in Mandate Palestine and then Israel between 1931 and 1948. It was an offshoot of the older and larger Jewish paramilitary organization Haganah (Hebrew: הגנה, Defence). The Irgun has been viewed as a terrorist organization, or as an organization that carried out terrorist acts. The Irgun policy was based on what was then called Revisionist Zionism founded by Ze'ev Jabotinsky. According to Howard Sachar, "The policy of the new organization was based squarely on Jabotinsky's teachings: every Jew had the right to enter Palestine; only active retaliation would deter the Arabs; only Jewish armed force would ensure the Jewish state". Two of the operations for which the Irgun is best known are the bombing of the King David Hotel in Jerusalem on 22 July 1946 and the Deir Yassin massacre that killed at least 107 Palestinian Arab villagers, including women and children, carried out together with Lehi on 9 April 1948. The organization committed acts of terrorism against the British, whom it regarded as illegal occupiers, and against Arabs. In particular, the Irgun was described as a terrorist organization by the United Nations, British, and United States governments; in media such as The New York Times newspaper; as well as by the Anglo-American Committee of Inquiry, the 1946 Zionist Congress and the Jewish Agency. However, academics such as Bruce Hoffman and Max Abrahms have written that the Irgun went to considerable lengths to avoid harming civilians, such as issuing pre-attack warnings; according to Hoffman, Irgun leadership urged "targeting the physical manifestations of British rule while avoiding the deliberate infliction of bloodshed." 
Albert Einstein, in a letter to The New York Times in 1948, compared Irgun and its successor Herut party to "Nazi and Fascist parties" and described it as a "terrorist, right wing, chauvinist organization". Irgun's tactics appealed to many Jews who believed that any action taken in the cause of the creation of a Jewish state was justified, including terrorism. Irgun members were absorbed into the Israel Defense Forces at the start of the 1948 Arab–Israeli war. The Irgun was a political predecessor to Israel's right-wing Herut (or "Freedom") party, which led to today's Likud party. Likud has led or been part of most Israeli governments since 1977. Members of the Irgun came mostly from Betar and from the Revisionist Party both in Palestine and abroad. The Revisionist Movement made up a popular backing for the underground organization. Ze'ev Jabotinsky, founder of Revisionist Zionism, commanded the organization until he died in 1940. He formulated the general realm of operation, regarding Restraint and the end thereof, and was the inspiration for the organization overall. An additional major source of ideological inspiration was the poetry of Uri Zvi Greenberg. The symbol of the organization, with the motto רק כך (only thus), underneath a hand holding a rifle in the foreground of a map showing both Mandatory Palestine and the Emirate of Transjordan (at the time, both were administered under the terms of the British Mandate for Palestine), implied that force was the only way to "liberate the homeland." The number of members of the Irgun varied from a few hundred to a few thousand. Most of its members were people who joined the organization's command, under which they carried out various operations and filled positions, largely in opposition to British law. Most of them were "ordinary" people, who held regular jobs, and only a few dozen worked full-time in the Irgun. 
The Irgun disagreed with the policy of the Yishuv and with the World Zionist Organization, both with regard to strategy and basic ideology and with regard to PR and military tactics, such as use of armed force to accomplish the Zionist ends, operations against the Arabs during the riots, and relations with the British mandatory government. Therefore, the Irgun tended to ignore the decisions made by the Zionist leadership and the Yishuv's institutions. As a result, the elected bodies did not recognize the independent organization, and during most of its existence the organization was seen as irresponsible and its actions as worthy of thwarting. Accordingly, the Irgun accompanied its armed operations with public-relations campaigns aiming to convince the public of the Irgun's way and of the problems with the official political leadership of the Yishuv. The Irgun put out numerous advertisements, an underground newspaper and even ran the first independent Hebrew radio station – Kol Zion HaLochemet. As members of an underground armed organization, Irgun personnel did not normally call the Irgun by its name but rather used other names. In the first years of its existence it was known primarily as Ha-Haganah Leumit (The National Defense), and also by names such as Haganah Bet ("Second Defense"), Irgun Bet ("Second Irgun"), the Parallel Organization and the Rightwing Organization. Later on it became most widely known as המעמד (the Stand). The anthem adopted by the Irgun was "Anonymous Soldiers", written by Avraham (Yair) Stern who was at the time a commander in the Irgun. Later on Stern defected from the Irgun and founded Lehi, and the song became the anthem of the Lehi. The Irgun's new anthem then became the third verse of the "Betar Song", by Ze'ev Jabotinsky. The Irgun gradually evolved from its humble origins into a serious and well-organized paramilitary organization. 
The movement developed a hierarchy of ranks and a sophisticated command-structure, and came to demand serious military training and strict discipline from its members. It developed clandestine networks of hidden arms-caches and weapons-production workshops, safe-houses, and training camps, along with a secret printing facility for propaganda posters. The ranks of the Irgun were (in ascending order): The Irgun was led by a High Command, which set policy and gave orders. Directly underneath it was a General Staff, which oversaw the activities of the Irgun. The General Staff was divided into a military and a support staff. The military staff was divided into operational units that oversaw operations and support units in charge of planning, instruction, weapons caches and manufacture, and first aid. The military and support staff never met jointly; they communicated through the High Command. Beneath the General Staff were six district commands: Jerusalem, Tel Aviv, Haifa-Galilee, Southern, Sharon, and Shomron, each led by a district commander. A local Irgun district unit was called a "Branch". A "brigade" in the Irgun was made up of three sections. A section was made up of two groups, at the head of each was a "Group Head", and a deputy. Eventually, various units were established, which answered to a "Center" or "Staff". The head of the Irgun High Command was the overall commander of the organization, but the designation of his rank varied. During the revolt against the British, Irgun commander Menachem Begin and the entire High Command held the rank of Gundar Rishon. His predecessors, however, had held their own ranks. A rank of Military Commander (Seren) was awarded to the Irgun commander Yaakov Meridor and a rank of High Commander (Aluf) to David Raziel. Until his death in 1940, Jabotinsky was known as the "Military Commander of the Etzel" or the Ha-Matzbi Ha-Elyon ("Supreme Commander"). 
Under the command of Menachem Begin, the Irgun was divided into different corps: The Irgun's commanders planned for it to have a regular combat force, a reserve, and shock units, but in practice there were not enough personnel for a reserve or for a shock force. The Irgun emphasized that its fighters be highly disciplined. Strict drill exercises were carried out at ceremonies at different times, and strict attention was given to discipline, formal ceremonies and military relationships between the various ranks. The Irgun put out professional publications on combat doctrine, weaponry, leadership, drill exercises, etc. Among these publications were three books written by David Raziel, who had studied military history, techniques, and strategy: A British analysis noted that the Irgun's discipline was "as strict as any army in the world." The Irgun operated a sophisticated recruitment and military-training regime. Those wishing to join had to find and make contact with a member, meaning only those who personally knew a member or were persistent could find their way in. Once contact had been established, a meeting was set up with the three-member selection committee at a safe-house, where the recruit was interviewed in a darkened room, with the committee either positioned behind a screen, or with a flashlight shone into the recruit's eyes. The interviewers asked basic biographical questions, and then asked a series of questions designed to weed out romantics and adventurers and those who had not seriously contemplated the potential sacrifices. Those selected attended a four-month series of indoctrination seminars in groups of five to ten, where they were taught the Irgun's ideology and the code of conduct it expected of its members. These seminars also had another purpose - to weed out the impatient and those of flawed purpose who had gotten past the selection interview. 
Then, members were introduced to other members, were taught the locations of safe-houses, and given military training. Irgun recruits trained with firearms, hand grenades, and were taught how to conduct combined attacks on targets. Arms handling and tactics courses were given in clandestine training camps, while practice shooting took place in the desert or by the sea. Eventually, separate training camps were established for heavy-weapons training. The most rigorous course was the explosives course for bomb-makers, which lasted a year. The British authorities believed that some Irgun members enlisted in the Jewish section of the Palestine Police Force for a year as part of their training, during which they also passed intelligence. In addition to the Irgun's sophisticated training program, many Irgun members were veterans of the Haganah (including the Palmach), the British Armed Forces, and Jewish partisan groups that had waged guerrilla warfare in Nazi-occupied Europe, thus bringing significant military training and combat experience into the organization. The Irgun also operated a course for its intelligence operatives, in which recruits were taught espionage, cryptography, and analysis techniques. Of the Irgun's members, almost all were part-time members. They were expected to maintain their civilian lives and jobs, dividing their time between their civilian lives and underground activities. There were never more than 40 full-time members, who were given a small expense stipend on which to live on. Upon joining, every member received an underground name. The Irgun's members were divided into cells, and worked with the members of their own cells. The identities of Irgun members in other cells were withheld. This ensured that an Irgun member taken prisoner could betray no more than a few comrades. In addition to the Irgun's members in Palestine, underground Irgun cells composed of local Jews were established in Europe following World War II. 
An Irgun cell was also established in Shanghai, home to many European-Jewish refugees. The Irgun also set up a Swiss bank account. Eli Tavin, the former head of Irgun intelligence, was appointed commander of the Irgun abroad. In November 1947, the Jewish insurgency came to an end as the UN approved the partition of Palestine, and the British had announced their intention to withdraw the previous month. As the British left and the 1947-48 Civil War in Mandatory Palestine got underway, the Irgun came out of the underground and began to function more as a standing army rather than as an underground organization. It began openly recruiting, training, and raising funds, and established bases, including training facilities. It also introduced field communications and created a medical unit and supply service. Until World War II the group armed itself with weapons purchased in Europe, primarily Italy and Poland, and smuggled to Palestine. The Irgun also established workshops that manufactured spare parts and attachments for the weapons. Also manufactured were land mines and simple hand grenades. Another way in which the Irgun armed itself was theft of weapons from the British Police and military. The Irgun's first steps were in the aftermath of the Riots of 1929. In the Jerusalem branch of the Haganah there were feelings of disappointment and internal unrest towards the leadership of the movements and the Histadrut (at that time the organization running the Haganah). These feelings were a result of the view that the Haganah was not adequately defending Jewish interests in the region. Likewise, critics of the leadership spoke out against alleged failures in the number of weapons, readiness of the movement and its policy of restraint and not fighting back. On April 10, 1931, commanders and equipment managers announced that they refused to return weapons to the Haganah that had been issued to them earlier, prior to the Nebi Musa holiday. 
These weapons were later returned by the commander of the Jerusalem branch, Avraham Tehomi, a.k.a. "Gideon". However, the commanders who decided to rebel against the leadership of the Haganah relayed a message regarding their resignations to the Vaad Leumi, and thus this schism created a new independent movement. The leader of the new underground movement was Avraham Tehomi, alongside other founding members who were all senior commanders in the Haganah, members of Hapoel Hatzair and of the Histadrut. Also among them was Eliyahu Ben Horin, an activist in the Revisionist Party. This group was known as the "Odessan Gang", because they previously had been members of the Haganah Ha'Atzmit of Jewish Odessa. The new movement was named Irgun Tsvai Leumi, ("National Military Organization") in order to emphasize its active nature in contrast to the Haganah. Moreover, the organization was founded with the desire to become a true military organization and not just a militia as the Haganah was at the time. In the autumn of that year the Jerusalem group merged with other armed groups affiliated with Betar. The Betar groups' center of activity was in Tel Aviv, and they began their activity in 1928 with the establishment of "Officers and Instructors School of Betar". Students at this institution had broken away from the Haganah earlier, for political reasons, and the new group called itself the "National Defense", הגנה הלאומית. During the riots of 1929 Betar youth participated in the defense of Tel Aviv neighborhoods under the command of Yermiyahu Halperin, at the behest of the Tel Aviv city hall. After the riots the Tel Avivian group expanded, and was known as "The Right Wing Organization". After the Tel Aviv expansion another branch was established in Haifa. Towards the end of 1932 the Haganah branch of Safed also defected and joined the Irgun, as well as many members of the Maccabi sports association. 
At that time the movement's underground newsletter, Ha'Metsudah (the Fortress), also began publication, expressing the active trend of the movement. The Irgun also increased its numbers by expanding draft regiments of Betar – groups of volunteers, committed to two years of security and pioneer activities. These regiments were based in places from which new Irgun strongholds stemmed, including the settlements of Yesod HaMa'ala, Mishmar HaYarden, Rosh Pina, Metula and Nahariya in the north; in the center – Hadera, Binyamina, Herzliya, Netanya and Kfar Saba, and south of there – Rishon LeZion, Rehovot and Ness Ziona. Later on regiments were also active in the Old City of Jerusalem ("the Kotel Brigades") among others. Primary training centers were based in Ramat Gan, Qastina (by Kiryat Mal'akhi of today) and other places. In 1933 there were some signs of unrest, seen in the incitement of the local Arab leadership to act against the authorities. The strong British response put down the disturbances quickly. During that time the Irgun operated in a similar manner to the Haganah and was a guarding organization. The two organizations cooperated in ways such as coordination of posts and even intelligence sharing. Within the Irgun, Tehomi was the first to serve as "Head of the Headquarters" or "Chief Commander". Alongside Tehomi served the senior commanders, or "Headquarters" of the movement. As the organization grew, it was divided into district commands. In August 1933 a "Supervisory Committee" for the Irgun was established, which included representatives from most of the Zionist political parties. The members of this committee were Meir Grossman (of the Hebrew State Party), Rabbi Meir Bar-Ilan (of the Mizrachi Party), either Immanuel Neumann or Yehoshua Supersky (of the General Zionists) and Ze'ev Jabotinsky or Eliyahu Ben Horin (of Hatzohar). 
In protest against Jewish immigration to Palestine, and with the aim of ending it, the Great Arab Revolt of 1936–1939 broke out on April 19, 1936. The riots took the form of attacks by Arab rioters ambushing main roads, bombing of roads and settlements as well as property and agriculture vandalism. In the beginning, the Irgun and the Haganah generally maintained a policy of restraint, apart from a few instances. Some expressed resentment at this policy, leading to internal unrest in the two organizations. The Irgun tended to retaliate more often, and sometimes Irgun members patrolled areas beyond their positions in order to encounter attackers ahead of time. However, there were differences of opinion within the Haganah, as well, regarding what to do. Due to the joining of many Betar Youth members, Jabotinsky (founder of Betar) had a great deal of influence over Irgun policy. Nevertheless, Jabotinsky was of the opinion that for moral reasons violent retaliation was not to be undertaken. In November 1936 the Peel Commission was sent to inquire into the outbreak of the riots and propose a solution to end the Revolt. In early 1937 there were still some in the Yishuv who felt the commission would recommend a partition of Mandatory Palestine (the land west of the Jordan River), thus creating a Jewish state on part of the land. The Irgun leadership, as well as the "Supervisory Committee", held similar beliefs, as did some members of the Haganah and the Jewish Agency. This belief strengthened the policy of restraint and led to the position that there was no room for defense institutions in the future Jewish state. Tehomi was quoted as saying: "We stand before great events: a Jewish state and a Jewish army. There is a need for a single military force". This position intensified the differences of opinion regarding the policy of restraint, both within the Irgun and within the political camp aligned with the organization. 
The leadership committee of the Irgun supported a merger with the Haganah. On April 24, 1937, a referendum was held among Irgun members regarding its continued independent existence. David Raziel and Avraham (Yair) Stern came out publicly in support for the continued existence of the Irgun: In April 1937 the Irgun split after the referendum. Approximately 1,500–2,000 people, about half of the Irgun's membership, including the senior command staff, regional committee members, along with most of the Irgun's weapons, returned to the Haganah, which at that time was under the Jewish Agency's leadership. The Supervisory Committee's control over the Irgun ended, and Jabotinsky assumed command. In their opinion, the removal of the Haganah from the Jewish Agency's leadership to the national institutions necessitated their return. Furthermore, they no longer saw significant ideological differences between the movements. Those who remained in the Irgun were primarily young activists, mostly laypeople, who sided with the independent existence of the Irgun. In fact, most of those who remained were originally Betar people. Moshe Rosenberg estimated that approximately 1,800 members remained. In theory, the Irgun remained an organization not aligned with a political party, but in reality the supervisory committee was disbanded and the Irgun's continued ideological path was outlined according to Ze'ev Jabotinsky's school of thought and his decisions, until the movement eventually became Revisionist Zionism's military arm. One of the major changes in policy by Jabotinsky was the end of the policy of restraint. On April 27, 1937, the Irgun founded a new headquarters, staffed by Moshe Rosenberg at the head, Avraham (Yair) Stern as secretary, David Raziel as head of the Jerusalem branch, Hanoch Kalai as commander of Haifa and Aharon Haichman as commander of Tel Aviv. 
On 20 Tammuz (June 29), the day of Theodor Herzl's death, a ceremony was held in honor of the reorganization of the underground movement. For security purposes this ceremony was held at a construction site in Tel Aviv. Ze'ev Jabotinsky placed Col. Robert Bitker at the head of the Irgun. Bitker had previously served as Betar commissioner in China and had military experience. A few months later, probably due to total incompatibility with the position, Jabotinsky replaced Bitker with Moshe Rosenberg. When the Peel Commission report was published a few months later, the Revisionist camp decided not to accept the commission's recommendations. Moreover, the organizations of Betar, Hatzohar and the Irgun began to increase their efforts to bring Jews to Eretz Israel (the Land of Israel), illegally. This Aliyah was known as the עליית אף על פי "Af Al Pi (Nevertheless) Aliyah". As opposed to this position, the Jewish Agency began acting on behalf of the Zionist interest on the political front, and continued the policy of restraint. From this point onwards the differences between the Haganah and the Irgun were much more obvious. According to Jabotinsky's "Evacuation Plan", which called for millions of European Jews to be brought to Palestine at once, the Irgun helped the illegal immigration of European Jews to Palestine. This was named by Jabotinsky the "National Sport". The most significant part of this immigration prior to World War II was carried out by the Revisionist camp, largely because the Yishuv institutions and the Jewish Agency shied away from such actions on grounds of cost and their belief that Britain would in the future allow widespread Jewish immigration. The Irgun joined forces with Hatzohar and Betar in September 1937, when it assisted with the landing of a convoy of 54 Betar members at Tantura Beach (near Haifa). 
The Irgun was responsible for discreetly bringing the Olim, or Jewish immigrants, to the beaches, and dispersing them among the various Jewish settlements. The Irgun also began participating in the organization of the immigration enterprise and undertook the task of accompanying the ships. This began with the ship Draga, which arrived at the coast of British Palestine in September 1938. In August of the same year, an agreement was made between Ari Jabotinsky (the son of Ze'ev Jabotinsky), the Betar representative, and Hillel Kook, the Irgun representative, to coordinate the immigration (also known as Ha'apala). The agreement was reaffirmed at the "Paris Convention" in February 1939, at which Ze'ev Jabotinsky and David Raziel were present. Afterwards, the "Aliyah Center" was founded, made up of representatives of Hatzohar, Betar, and the Irgun, thereby making the Irgun a full participant in the process. The difficult conditions on the ships demanded a high level of discipline. The people on board were often split into units, led by commanders. In addition to a daily roll call and the distribution of food and water (usually very little of either), organized talks were held to provide information about the actual arrival in Palestine. One of the largest ships was the Sakaria, with 2,300 passengers, equal to about 0.5% of the Jewish population in Palestine. The first vessel arrived on April 13, 1937, and the last on February 13, 1940. All told, about 18,000 Jews immigrated to Palestine with the help of the Revisionist organizations and private initiatives by other Revisionists; most were not caught by the British. While continuing to defend settlements, Irgun members began attacks on Arab villages around April 1936, thus ending the policy of restraint. These attacks were intended to instill fear on the Arab side, so that the Arabs would come to wish for peace and quiet.
In March 1938, David Raziel wrote in the underground newspaper "By the Sword" a foundational article for the Irgun, in which he coined the term "Active Defense". By the end of World War II, more than 250 Arabs had been killed. During 1936, Irgun members carried out approximately ten attacks, and throughout 1937 the Irgun continued this line of operation. At that time, however, these acts were not yet part of a formulated Irgun policy. Not all of these operations received a commander's approval, and Jabotinsky was not in favor of such actions at the time; he still hoped to establish a Jewish force out in the open that would not have to operate underground. However, what the Irgun saw as the failure of the Peel Commission, together with the renewal of violence on the part of the Arabs, caused it to rethink its official policy. 14 November 1937 was a watershed in Irgun activity: from that date, the Irgun increased its reprisals. Following an increase in the number of attacks aimed at Jews, including the killing of five kibbutz members near Kiryat Anavim (today kibbutz Ma'ale HaHamisha), the Irgun undertook a series of attacks in various places in Jerusalem, killing five Arabs. Operations were also undertaken in Haifa (shooting at the Arab-populated Wadi Nisnas neighborhood) and in Herzliya. The date is known as the day the policy of restraint (Havlagah) ended, or as Black Sunday, the operations having resulted in the deaths of ten Arabs. This is when the organization, with the approval of Jabotinsky and Headquarters, fully changed its policy to one of "active defense". The British responded with the arrest of Betar and Hatzohar members as suspected members of the Irgun. Military courts were allowed to act under the "Time of Emergency Regulations" and even to sentence people to death.
It was under these regulations that Yehezkel Altman, a guard in a Betar battalion in the Nahalat Yitzhak neighborhood of Tel Aviv, was tried after shooting at an Arab bus without his commanders' knowledge. Altman was acting in response to a shooting at Jewish vehicles on the Tel Aviv–Jerusalem road the day before. He turned himself in and was sentenced to death, a sentence later commuted to life imprisonment. Despite the arrests, Irgun members continued fighting, and Jabotinsky lent his moral support to these activities. In a letter to Moshe Rosenberg on 18 March 1938 he wrote: Although the Irgun continued activities such as these, following Rosenberg's orders they were greatly curtailed. Furthermore, for fear of the British threat of the death sentence for anyone found carrying a weapon, all operations were suspended for eight months. However, opposition to this policy gradually increased. In April 1938, responding to the killing of six Jews, Betar members from the Rosh Pina Brigade went on a reprisal mission without the consent of their commander, as described by historian Avi Shlaim: Although the incident ended without casualties, the three were caught, and one of them, Shlomo Ben-Yosef, was sentenced to death. Demonstrations around the country, as well as pressure from institutions and from figures such as Dr. Chaim Weizmann and the Chief Rabbi of Mandatory Palestine, Yitzhak HaLevi Herzog, did not succeed in reducing his sentence. Among Shlomo Ben-Yosef's writings in Hebrew was later found: On 29 June 1938 he was executed, the first of the Olei Hagardom. The Irgun revered him after his death and many regarded him as an example. In light of this, and due to the anger of the Irgun leadership over the policy of restraint adopted until that point, Jabotinsky relieved Rosenberg of his post and replaced him with David Raziel, who proved to be the most prominent Irgun commander until Menachem Begin.
Jabotinsky simultaneously instructed the Irgun to end its policy of restraint, leading to armed offensive operations until the end of the Arab Revolt in 1939. In this period the Irgun mounted about 40 operations against Arabs and Arab villages, for instance: These actions led the British Parliament to discuss the disturbances in Palestine. On 23 February 1939 the Secretary of State for the Colonies, Malcolm MacDonald, revealed the British intention to cancel the mandate and establish a state that would preserve Arab rights. This caused a wave of riots and attacks by Arabs against Jews, and the Irgun responded four days later with a series of attacks on Arab buses and other sites. The British used military force against the Arab rioters, and in its latter stages the revolt by the Arab community in Palestine deteriorated into a series of internal gang wars. At the same time, the Irgun also established itself in Europe, building underground cells that participated in organizing migration to Palestine. The cells were made up almost entirely of Betar members, and their primary activity was military training in preparation for emigration to Palestine. Ties formed with the Polish authorities brought about courses in which Irgun commanders were trained by Polish officers in advanced military subjects such as guerrilla warfare, tactics and the laying of land mines. Avraham (Yair) Stern was notable among the cell organizers in Europe. In 1937 the Polish authorities began to deliver large amounts of weapons to the underground. According to Irgun activists, Poland supplied the organization with 25,000 rifles as well as additional materiel and weapons; by summer 1939 the Irgun's Warsaw warehouses held 5,000 rifles and 1,000 machine guns. The training and support provided by Poland would have allowed the organization to mobilize 30,000–40,000 men. The transfer of handguns, rifles, explosives and ammunition stopped with the outbreak of World War II.
Another field in which the Irgun operated was the training of pilots at the flight school in Lod, so that they could serve in an air force in the future war for independence. Towards the end of 1938 there was progress towards aligning the ideologies of the Irgun and the Haganah, as many abandoned the belief that the land would be divided and a Jewish state would soon exist. The Haganah founded פו"מ (pronounced poom), a special operations unit which carried out reprisal attacks following Arab violence; these operations continued into 1939. Furthermore, opposition within the Yishuv to illegal immigration significantly decreased, and the Haganah began to bring Jews to Palestine using rented ships, as the Irgun had in the past. The publication of the MacDonald White Paper of 1939 brought with it new edicts that were intended to lead to a more equitable settlement between Jews and Arabs, but which some Jews considered to have an adverse effect on the continued development of the Jewish community in Palestine. Chief among these were the prohibition on selling land to Jews and the reduced quotas for Jewish immigration. The entire Yishuv was furious at the contents of the White Paper, and there were demonstrations against the "Treacherous Paper", as it was considered that it would preclude the establishment of a Jewish homeland in Palestine. Under the temporary command of Hanoch Kalai, the Irgun began sabotaging strategic infrastructure such as electricity facilities and radio and telephone lines. It also started publicizing its activity and its goals in street announcements and newspapers, as well as on the underground radio station Kol Zion HaLochemet. On August 26, 1939, the Irgun killed Ralph Cairns, a British police officer who, as head of the Jewish Department in the Palestine Police, had been closing the net on Avraham Stern. The Irgun had accused him of torturing a number of its members.
Cairns and Ronald Barker, another British police officer, were killed by a remotely detonated Irgun landmine. The British increased their efforts against the Irgun, and on August 31 the British police arrested members meeting in the Irgun headquarters. The next day, September 1, 1939, World War II broke out. Following the outbreak of war, Ze'ev Jabotinsky and the New Zionist Organization voiced their support for Britain and France. In mid-September 1939 Raziel was moved from his place of detention in Tzrifin. This, among other events, encouraged the Irgun to announce a cessation of its activities against the British so as not to hinder Britain's fight against "the Hebrew's greatest enemy in the world – German Nazism". The announcement ended with the hope that after the war a Hebrew state would be founded "within the historical borders of the liberated homeland". After this announcement Irgun, Betar and Hatzohar members, including Raziel and the Irgun leadership, were gradually released from detention. The Irgun did not rule out joining the British army and the Jewish Brigade, and its members enlisted in various British units. Irgun members also assisted British forces with intelligence in Romania, Bulgaria, Morocco and Tunisia, and an Irgun unit operated in Syria and Lebanon. David Raziel later died during one of these operations. During the Holocaust, Betar members revolted numerous times against the Nazis in occupied Europe. The largest of these revolts was the Warsaw Ghetto Uprising, in which an armed underground organization formed by Betar and Hatzohar, known as the Żydowski Związek Wojskowy (ŻZW, Jewish Military Union), fought. Despite its political origins, the ŻZW accepted members without regard to political affiliation, and had contacts established before the war with elements of the Polish military.
Because of differences over objectives and strategy, the ŻZW was unable to form a common front with the mainstream ghetto fighters of the Żydowska Organizacja Bojowa, and fought independently under the military leadership of Paweł Frenkiel and the political leadership of Dawid Wdowiński. There were instances of Betar members enlisted in the British military smuggling British weapons to the Irgun. From 1939 onwards, an Irgun delegation in the United States worked for the creation of a Jewish army made up of Jewish refugees and Jews from Palestine, to fight alongside the Allied forces. In July 1943 the "Emergency Committee to Save the Jewish People in Europe" was formed, and worked until the end of the war to rescue the Jews of Europe from the Nazis and to garner public support for a Jewish state. However, it was not until January 1944 that US President Franklin Roosevelt established the War Refugee Board, which achieved some success in saving European Jews. Throughout this entire period, the British continued enforcing the White Paper's provisions, which included a ban on the sale of land, restrictions on Jewish immigration and increased vigilance against illegal immigration. Part of the reason the British banned land sales (to anyone) was the confused state of the post-Ottoman land registry; it was difficult to determine who actually owned the land that was for sale. Within the ranks of the Irgun this created much disappointment and unrest, at the center of which was disagreement with the leadership of the New Zionist Organization, David Raziel and the Irgun Headquarters. On June 18, 1939, Avraham (Yair) Stern and others of the leadership were released from prison, and a rift opened between them and the Irgun and Hatzohar leadership. The controversy centred on the issues of the underground movement's submission to public political leadership and of fighting the British. On his release from prison Raziel resigned from Headquarters.
To his chagrin, senior Irgun members carried out independent operations, and some commanders even doubted Raziel's loyalty. In his place, Stern was elected to the leadership. In the past, Stern had founded secret Irgun cells in Poland without Jabotinsky's knowledge and in opposition to his wishes. Furthermore, Stern was in favor of removing the Irgun from the authority of the New Zionist Organization, whose leadership urged Raziel to return to the command of the Irgun; he finally consented. Jabotinsky wrote to Raziel and to Stern, and these letters were distributed to the branches of the Irgun: Stern was sent a telegram with an order to obey Raziel, who was reappointed. However, these events did not prevent the splitting of the organization, and suspicion and distrust were rampant among the members. Out of the Irgun a new organization was created on July 17, 1940, first named "The National Military Organization in Israel" (as opposed to the "National Military Organization in the Land of Israel") and later renamed Lehi, an acronym for Lohamei Herut Israel, "Fighters for the Freedom of Israel" (לח"י – לוחמי חירות ישראל). Jabotinsky died in New York on August 4, 1940; even his death did not mend the Lehi split. Following Jabotinsky's death, ties were formed between the Irgun and the New Zionist Organization; these ties would last until 1944, when the Irgun declared a revolt against the British. The primary difference between the Irgun and the newly formed organization was the latter's intention to fight the British in Palestine regardless of their war against Germany. Later, additional operational and ideological differences developed that contradicted some of the Irgun's guiding principles; for example, the Lehi, unlike the Irgun, supported a population exchange with the local Arabs. The split damaged the Irgun both organizationally and in terms of morale, and the death of Jabotinsky, the movement's spiritual leader, added to this feeling.
Together, these factors brought about a mass abandonment by members, and the British took advantage of this weakness to gather intelligence and arrest Irgun activists. The new Irgun leadership, which included Meridor, Yerachmiel Ha'Levi, Rabbi Moshe Zvi Segal and others, used the forced hiatus in activity to rebuild the damaged organization. This period was also marked by more cooperation between the Irgun and the Jewish Agency; however, David Ben-Gurion's uncompromising demand that the Irgun accept the Agency's command foiled any further cooperation. In both the Irgun and the Haganah more voices were being heard opposing any cooperation with the British. Nevertheless, an Irgun operation carried out in the service of Britain was aimed at sabotaging pro-Nazi forces in Iraq, including the assassination of Haj Amin al-Husayni; among the participants were Raziel and Yaakov Meridor. On May 20, 1941, during a Luftwaffe air raid on RAF Habbaniya near Baghdad, David Raziel, commander of the Irgun, was killed. In late 1943 a joint Haganah–Irgun initiative was developed to form a single fighting body, unaligned with any political party, by the name of עם לוחם (Fighting Nation). The new body's first plan was to kidnap the British High Commissioner of Palestine, Sir Harold MacMichael, and take him to Cyprus. However, the Haganah leaked the planned operation and it was thwarted before it got off the ground. Nevertheless, at this stage the Irgun ceased its cooperation with the British. As Eliyahu Lankin tells in his book: In 1943 the Polish II Corps, commanded by Władysław Anders, arrived in Palestine from Iraq. The British insisted that no Jewish units of the army be created. Eventually, many of the soldiers of Jewish origin who arrived with the army were released and allowed to stay in Palestine. One of them was Menachem Begin, whose arrival in Palestine created new-found expectations within the Irgun and Betar.
Begin had served as head of the Betar movement in Poland and was a respected leader. Yaakov Meridor, then the commander of the Irgun, raised the idea of appointing Begin to the post. In late 1943, when Begin accepted the position, a new leadership was formed: Meridor became Begin's deputy, and the other members of the board were Aryeh Ben Eliezer, Eliyahu Lankin, and Shlomo Lev Ami. On February 1, 1944, the Irgun put up posters all around the country proclaiming a revolt against the British mandatory government. The posters began by saying that all of the Zionist movements stood by the Allied forces and that over 25,000 Jews had enlisted in the British military, yet the hope of establishing a Jewish army had died; European Jewry was trapped and being destroyed, while Britain, for its part, did not allow any rescue missions. This part of the document ends with the following words: The Irgun then declared that, for its part, the ceasefire was over and it was now at war with the British. It demanded the transfer of rule to a Jewish government, which would implement ten policies. Among these were the mass evacuation of Jews from Europe, the signing of treaties with any state that recognized the Jewish state's sovereignty, including Britain, the granting of social justice to the state's residents, and full equality for the Arab population. The proclamation ended with: The Irgun began this campaign from a position of weakness. At the start of the revolt it was only about 1,000 strong, including some 200 fighters, and it possessed about four submachine guns, 40 rifles, 60 pistols, 150 hand grenades, and 2,000 kilograms of explosives; its funds amounted to about £800. The Irgun began a militant campaign against the symbols of government, in an attempt to harm the regime's operation as well as its reputation. The first attack was on February 12, 1944, at the government immigration offices, a symbol of the immigration laws.
The attacks, carried out in the three largest cities (Jerusalem, Tel Aviv, and Haifa), went smoothly and caused no casualties, as they took place on a Saturday night, when the buildings were empty. On February 27 the income tax offices in the same cities were bombed, also on a Saturday night, and parts of the buildings were destroyed; prior warnings were posted near the buildings. On March 23 the national headquarters of the British police in the Russian Compound in Jerusalem was attacked, and part of it was blown up. These attacks in the first few months were sharply condemned by the organized leadership of the Yishuv and by the Jewish Agency, who saw them as dangerous provocations. At the same time the Lehi also renewed its attacks against the British. The Irgun continued to attack police stations and headquarters, including the Tegart fort at Latrun, a fortified police station. One relatively complex operation was the takeover of the radio station in Ramallah on May 17, 1944. One symbolic act by the Irgun took place before Yom Kippur of 1944: it plastered notices around town warning that no British officers should come to the Western Wall on Yom Kippur, and for the first time since the mandate began no British police officers were there to prevent the Jews from the traditional shofar blowing at the end of the fast. After the fast that year the Irgun attacked four police stations in Arab settlements. To obtain weapons, the Irgun carried out "confiscation" operations: it robbed British armouries and smuggled the stolen weapons to its own hiding places. During this phase of activity the Irgun also cut all of its official ties with the New Zionist Organization, so as not to tie that organization's fate to the underground's. Begin wrote in his memoirs, The Revolt: In October 1944 the British began expelling hundreds of arrested Irgun and Lehi members to detention camps in Africa.
251 detainees from Latrun were flown on thirteen planes, on October 19, to a camp in Asmara, Eritrea; eleven additional transports followed. Throughout the period of their detention, the detainees often initiated rebellions and hunger strikes, and many escape attempts were made, until July 1948, when the exiles were returned to Israel. While there were numerous successful escapes from the camp itself, only nine men actually made it all the way back. One noted success was that of Yaakov Meridor, who escaped nine times before finally reaching Europe in April 1948. These tribulations were the subject of his book Long is the Path to Freedom: Chronicles of one of the Exiles. On November 6, 1944, Lord Moyne, British Deputy Resident Minister of State in Cairo, was assassinated by Lehi members Eliyahu Hakim and Eliyahu Bet-Zuri. This act raised concerns within the Yishuv about the British regime's likely reaction to the underground's violent acts. The Jewish Agency therefore decided to launch a Hunting Season, known as the Saison (from the French "la saison de chasse"). The Irgun's recuperation became noticeable when it renewed its cooperation with the Lehi in May 1945, sabotaging oil pipelines, telephone lines and railroad bridges. All in all, over 1,000 members of the Irgun and Lehi were arrested and interned in British camps during the Saison. Eventually the Hunting Season died out, and there was even talk of cooperation with the Haganah, leading to the formation of the Jewish Resistance Movement. Towards the end of July 1945 the Labour Party was elected to power in Britain. The Yishuv leadership had high hopes that this would change the anti-Zionist policy that the British had maintained until then. However, these hopes were quickly dashed when the government limited Jewish immigration, with the intention that the Jewish population of Mandatory Palestine (the land west of the Jordan River) would not exceed one-third of the total.
This, along with the stepping up of arrests and the pursuit of underground members and illegal-immigration organizers, led to the formation of the Jewish Resistance Movement, a body that consolidated the armed resistance to the British of the Irgun, Lehi, and Haganah. For ten months the Irgun and the Lehi cooperated, carrying out nineteen attacks and defense operations; the Haganah and Palmach carried out ten such operations. The Haganah also assisted in landing 13,000 illegal immigrants. Tension between the underground movements and the British increased with the increase in operations. On April 23, 1946, an Irgun operation to seize weapons from the Tegart fort at Ramat Gan resulted in a firefight with the police in which an Arab constable and two Irgun fighters were killed, including one who jumped on an explosive device to save his comrades. A third fighter, Dov Gruner, was wounded and captured; he stood trial and was sentenced to death by hanging, refusing to sign a pardon request. In 1946, British relations with the Yishuv worsened, building up to Operation Agatha of June 29. The authorities ignored the Anglo-American Committee of Inquiry's recommendation to allow 100,000 Jews into Palestine at once. As a result of the discovery of documents tying the Jewish Agency to the Jewish Resistance Movement, the Irgun was asked to speed up the plans for the King David Hotel bombing of July 22. The hotel was where the documents were held, and it served as the base of the British Secretariat, the military command and a branch of the Criminal Investigation Division of the police. The Irgun later claimed to have sent a warning that was ignored; Palestinian and U.S. sources confirm that the Irgun issued numerous warnings for civilians to evacuate the hotel prior to the bombing. Ninety-one people were killed in the attack, in which a 350 kg bomb placed in the basement of the hotel caused a large section of it to collapse; only 13 of the dead were British soldiers.
The King David Hotel bombing and the arrest of Jewish Agency and other Yishuv leaders as part of Operation Agatha caused the Haganah to cease its armed activity against the British, and the Yishuv and Jewish Agency leaders were released from prison. From then until the end of the British mandate, resistance activities were led by the Irgun and Lehi. In early September 1946 the Irgun renewed its attacks against civil structures, railroads, communication lines and bridges. One operation was the attack on the train station in Jerusalem, in which Meir Feinstein was arrested; he later committed suicide while awaiting execution. According to the Irgun, such armed attacks were legitimate, since the trains primarily served the British for the redeployment of their forces. The Irgun also distributed leaflets, in three languages, warning the public not to use specific trains that were in danger of being attacked; for a while, the British stopped train traffic at night. The Irgun also carried out repeated attacks against military and police traffic using disguised, electronically detonated roadside mines, which an operator hiding nearby could set off as a vehicle passed. It carried out arms raids against military bases and police stations (often disguised as British soldiers), launched bombing, shooting, and mortar attacks against military and police installations and checkpoints, and robbed banks to raise funds, having lost access to Haganah funding with the collapse of the Jewish Resistance Movement. On October 31, 1946, in response to the British barring the entry of Jews into Palestine, the Irgun blew up the British Embassy in Rome, a center of British efforts to monitor and stop Jewish immigration. The Irgun also carried out a few other operations in Europe: a British troop train was derailed, and an attempt against another troop train failed.
An attack on a British officers' club in Vienna took place in 1947, and attacks on another British officers' club in Vienna and on a sergeants' club in Germany took place in 1948. In December 1946 a sentence of 18 years' imprisonment and 18 lashes was handed down to a young Irgun member for robbing a bank. The Irgun made good on a threat it had made: after the detainee was whipped, Irgun members kidnapped British officers and beat them in public. The operation, known as the "Night of the Beatings", brought an end to British punitive beatings. The British, taking these acts seriously, moved many British families in Palestine into the confines of military bases, and some returned home. On February 14, 1947, Ernest Bevin announced that since the Jews and Arabs would not be able to agree on any British-proposed solution for the land, the issue must be brought to the United Nations (UN) for a final decision. The Yishuv saw the transfer of the issue to the UN as a British delaying tactic: a UN inquiry commission would be established and its ideas discussed, and all the while the Yishuv would weaken. The Mossad LeAliyah Bet (the Institution for Immigration B) increased the number of ships bringing in Jewish refugees, but the British still strictly enforced the policy of limited Jewish immigration, and illegal immigrants were placed in detention camps in Cyprus, which increased the anger of the Jewish community towards the mandate government. The Irgun stepped up its activity, and from February 19 until March 3 it attacked 18 British military camps, convoy routes, vehicles, and other facilities. The most notable of these attacks was the bombing of a British officers' club located in Goldsmith House in Jerusalem, which stood in a heavily guarded security zone. Covered by machine-gun fire, an Irgun assault team in a truck penetrated the security zone and lobbed explosives into the building. Thirteen people, including two officers, were killed.
As a result, martial law was imposed over much of the country, enforced by approximately 20,000 British soldiers. Despite this, attacks continued throughout the martial-law period; the most notable was an Irgun attack on the Royal Army Pay Corps base at the Schneller Orphanage, in which a British soldier was killed. Throughout its struggle against the British, the Irgun sought to publicize its cause around the world. By humiliating the British, it attempted to focus global attention on Palestine, hoping that any British overreaction would be widely reported and thus result in more political pressure against the British; Begin described this strategy as turning Palestine into a "glass house". The Irgun also re-established many representative offices internationally, and by 1948 operated in 23 states. In these countries the Irgun sometimes acted against the local British representatives or led public-relations campaigns against Britain. According to Bruce Hoffman: "In an era long before the advent of 24/7 global news coverage and instantaneous satellite-transmitted broadcasts, the Irgun deliberately attempted to appeal to a worldwide audience far beyond the immediate confines of its local struggle, and beyond even the ruling regime's own homeland." On April 16, 1947, Irgun members Dov Gruner, Yehiel Dresner, Eliezer Kashani, and Mordechai Alkahi were hanged in Acre Prison while singing Hatikvah. On April 21, Meir Feinstein and Lehi member Moshe Barazani blew themselves up with a smuggled grenade hours before their scheduled hanging. On May 4 one of the Irgun's largest operations took place: the raid on Acre Prison. The operation was carried out by 23 men, commanded by Dov Cohen, known as "Shimshon", with the help of the Irgun and Lehi prisoners inside the prison, whom the Irgun had informed of the plan in advance and to whom it had smuggled in explosives.
After a hole was blasted in the prison wall, the 41 Irgun and Lehi members who had been chosen to escape ran to it, blasting through the inner prison gates with the smuggled explosives. Meanwhile, Irgun teams mined roads and launched a mortar attack on a nearby British Army camp to delay the arrival of responding British forces. Although the 41 escapees managed to get out of the prison and board the escape trucks, some were rapidly recaptured; nine of the escapees and attackers were killed, and five Irgun men in the attacking party were captured. Overall, 27 of the 41 designated escapees got away. Along with the underground members, ordinary criminals, including 214 Arabs, also escaped. Of the five attackers who were caught, three, Avshalom Haviv, Meir Nakar, and Yaakov Weiss, were sentenced to death. After the death sentences of the three were confirmed, the Irgun tried to save them by kidnapping hostages, the British sergeants Clifford Martin and Mervyn Paice, in the streets of Netanya. British forces closed off and combed the area in search of the two, but did not find them. On July 29, 1947, in the afternoon, Meir Nakar, Avshalom Haviv, and Yaakov Weiss were executed. Approximately thirteen hours later the hostages were hanged in retaliation by the Irgun, and their bodies, booby-trapped with an explosive, were afterwards strung up from trees in woodland south of Netanya. This action caused an outcry in Britain and was condemned both there and by Jewish leaders in Palestine, and it has been cited as a major influence on the British decision to terminate the mandate and leave Palestine. The United Nations Special Committee on Palestine (UNSCOP) was also influenced by this and other actions. At the same time another incident was developing: the events of the ship Exodus 1947, whose 4,500 Holocaust-survivor passengers were not allowed to enter Palestine. UNSCOP also followed these events.
Some of its members were even present at Haifa port when the would-be immigrants were forcibly removed from their ship (later found to have been rigged with an IED by some of its passengers) onto the deportation ships, and later commented that this strong image helped them press for an immediate solution for Jewish immigration and the question of Palestine. Two weeks later, the House of Commons convened for a special debate on events in Palestine, and concluded that British soldiers should be withdrawn as soon as possible. UNSCOP's conclusion was a unanimous decision to end the British mandate, and a majority decision to divide Mandatory Palestine (the land west of the Jordan River) between a Jewish state and an Arab state. During the UN's deliberations regarding the committee's recommendations the Irgun avoided initiating any attacks, so as not to influence the UN negatively on the idea of a Jewish state. On November 29 the UN General Assembly voted in favor of ending the mandate and establishing two states on the land. That very same day the Irgun and the Lehi renewed their attacks on British targets. The next day the local Arabs began attacking the Jewish community, thus beginning the first stage of the 1948 Palestine War. The first attacks on Jews were in Jewish neighborhoods of Jerusalem, in and around Jaffa, and in Bat Yam, Holon, and the Ha'Tikvah neighborhood in Tel Aviv. In the autumn of 1947, the Irgun had approximately 4,000 members. The goal of the organization at that point was to conquer the land between the Jordan River and the Mediterranean Sea for the future Jewish state and to prevent Arab forces from driving out the Jewish community. The Irgun became almost an overt organization, establishing military bases in Ramat Gan and Petah Tikva. It began recruiting openly, thus significantly increasing in size. During the war the Irgun fought alongside the Lehi and the Haganah at the front against the Arab attacks. 
At first the Haganah maintained a defensive policy, as it had until then, but after the Convoy of 35 incident it completely abandoned its policy of restraint: "Distinguishing between individuals is no longer possible, for now – it is a war, and even the innocent shall not be absolved." The Irgun also began carrying out reprisal missions, as it had under David Raziel's command. At the same time, though, it published announcements calling on the Arabs to lay down their weapons and maintain a ceasefire. However, the mutual attacks continued. The Irgun attacked the Arab villages of Tira near Haifa, Yehudiya ('Abassiya) in the center, and Shuafat by Jerusalem. The Irgun also attacked in the Wadi Rushmiya neighborhood in Haifa and Abu Kabir in Jaffa. On December 29 Irgun units arrived by boat at the Jaffa shore, and a gunfight between them and Arab gangs ensued. The following day a bomb was thrown from a speeding Irgun car at a group of Arab men waiting to be hired for the day at the Haifa oil refinery, killing seven Arabs and injuring dozens. In response, some Arab workers attacked Jews in the area, killing 41. This sparked a Haganah response in Balad al-Sheykh, which resulted in the deaths of 60 civilians. The Irgun's goal in the fighting was to move the battles from Jewish-populated areas to Arab-populated areas. On January 1, 1948, the Irgun attacked again in Jaffa, its men wearing British uniforms; later in the month it attacked Beit Nabala, a base for many Arab fighters. On 5 January 1948 the Irgun detonated a lorry bomb outside Jaffa's Ottoman-built Town Hall, killing 14 and injuring 19. In Jerusalem, two days later, Irgun members in a stolen police van rolled a barrel bomb into a large group of civilians who were waiting for a bus by the Jaffa Gate, killing around sixteen. In the pursuit that followed, three of the attackers were killed and two taken prisoner. 
On 6 April 1948, the Irgun raided the British Army camp at Pardes Hanna, killing six British soldiers and their commanding officer. The Deir Yassin massacre was carried out in a village west of Jerusalem that had signed a non-belligerency pact with its Jewish neighbors and the Haganah, and had repeatedly barred entry to foreign irregulars. On 9 April approximately 120 Irgun and Lehi members began an operation to capture the village. During the operation the villagers fiercely resisted the attack and a battle broke out; in the end, the Irgun and Lehi forces advanced gradually through house-to-house fighting. The village was only taken after the Irgun began systematically dynamiting houses, and after a Palmach unit intervened and employed mortar fire to silence the villagers' sniper positions. The operation resulted in five Jewish fighters dead and 40 injured. Some 100 to 120 villagers were also killed. There are allegations that Irgun and Lehi forces committed war crimes during and after the capture of the village. These allegations include reports that fleeing individuals and families were fired at, and that prisoners of war were killed after their capture; the incident was also documented in a Haganah report. Some say that this incident accelerated the Arab exodus from Palestine. The Irgun cooperated with the Haganah in the conquest of Haifa. At the regional commander's request, on April 21 the Irgun took over an Arab post above Hadar Ha'Carmel as well as the Arab neighborhood of Wadi Nisnas, adjacent to the Lower City. The Irgun acted independently in the conquest of Jaffa (part of the proposed Arab state according to the UN Partition Plan). On April 25 Irgun units, about 600 strong, left the Irgun base in Ramat Gan towards Arab Jaffa. Difficult battles ensued, and the Irgun faced resistance from the Arabs as well as the British. 
Under the command of Amichai "Gidi" Paglin, the Irgun's chief operations officer, the Irgun captured the neighborhood of Manshiya, which threatened the city of Tel Aviv. Afterwards the force continued to the sea, towards the area of the port, and using mortars, shelled the southern neighborhoods. In his report concerning the fall of Jaffa the local Arab military commander, Michel Issa, wrote: "Continuous shelling with mortars of the city by Jews for four days, beginning 25 April, [...] caused inhabitants of city, unaccustomed to such bombardment, to panic and flee." According to Morris the shelling was done by the Irgun. Their objective was "to prevent constant military traffic in the city, to break the spirit of the enemy troops [and] to cause chaos among the civilian population in order to create a mass flight." High Commissioner Cunningham wrote a few days later "It should be made clear that IZL attack with mortars was indiscriminate and designed to create panic among the civilian inhabitants." The British demanded the evacuation of the newly conquered city, and militarily intervened, ending the Irgun offensive. Heavy British shelling against Irgun positions in Jaffa failed to dislodge them, and when British armor pushed into the city, the Irgun resisted; a bazooka team managed to knock out one tank, buildings were blown up and collapsed onto the streets as the armor advanced, and Irgun men crawled up and tossed live dynamite sticks onto the tanks. The British withdrew, and opened negotiations with the Jewish authorities. An agreement was worked out, under which Operation Hametz would be stopped and the Haganah would not attack Jaffa until the end of the Mandate. The Irgun would evacuate Manshiya, with Haganah fighters replacing them. British troops would patrol its southern end and occupy the police fort. 
The Irgun had previously agreed with the Haganah that British pressure would not lead to withdrawal from Jaffa and that custody of captured areas would be turned over to the Haganah. The city ultimately fell on May 13, after Haganah forces entered from the south and took control of the rest of it as part of Operation Hametz, which included the conquest of a number of villages in the area. The battles in Jaffa were a great victory for the Irgun. The operation, the largest in the history of the organization, took place in a highly built-up area with many militants in shooting positions. During the battles, explosives were used to break into homes and forge a way through them. Furthermore, this was the first occasion on which the Irgun had directly fought British forces reinforced with armor and heavy weaponry. The city began these battles with an Arab population estimated at 70,000, which shrank to some 4,100 Arab residents by the end of major hostilities. Since the Irgun captured the neighborhood of Manshiya on its own, causing the flight of many of Jaffa's residents, the Irgun took credit for the conquest of Jaffa. It had lost 42 dead and about 400 wounded during the battle. On May 14, 1948, the establishment of the State of Israel was proclaimed. The declaration of independence was followed by the establishment of the Israel Defense Forces (IDF), and the process of absorbing all military organizations into the IDF began. On June 1, an agreement was signed between Menachem Begin and Yisrael Galili for the absorption of the Irgun into the IDF. One of its clauses stated that the Irgun had to stop smuggling arms. Meanwhile, in France, Irgun representatives purchased a ship, renamed Altalena (a pseudonym of Ze'ev Jabotinsky), and weapons. The ship sailed on June 11 and arrived at the Israeli coast on June 20, during the first truce of the 1948 Arab–Israeli War. 
Despite United Nations Security Council Resolution 50 declaring an arms embargo in the region, neither side respected it. When the ship arrived, the Israeli government, headed by Ben-Gurion, was adamant in its demand that the Irgun surrender and hand over all of the weapons. Ben-Gurion said: "We must decide whether to hand over power to Begin or to order him to cease his activities. If he does not do so, we will open fire! Otherwise, we must decide to disperse our own army." There were two confrontations between the newly formed IDF and the Irgun. When the Altalena reached Kfar Vitkin in the late afternoon of Sunday, June 20, many Irgun militants, including Begin, were waiting on the shore, and a clash occurred with the Alexandroni Brigade, commanded by Dan Even (Epstein). Fighting ensued and there were a number of casualties on both sides. The clash ended with a ceasefire, the transfer of the weapons on shore to the local IDF commander, and the ship – now reinforced with local Irgun members, including Begin – sailing to Tel Aviv, where the Irgun had more supporters. Many Irgun members who had joined the IDF earlier that month left their bases and concentrated on the Tel Aviv beach, and a confrontation between them and IDF units began. In response, Ben-Gurion ordered Yigael Yadin (acting Chief of Staff) to concentrate large forces on the Tel Aviv beach and to take the ship by force. Heavy guns were transferred to the area, and at four in the afternoon Ben-Gurion ordered the shelling of the Altalena. One of the shells hit the ship, which began to burn. Sixteen Irgun fighters were killed in the confrontation with the army: six in the Kfar Vitkin area and ten on the Tel Aviv beach. Three IDF soldiers were killed: two at Kfar Vitkin and one in Tel Aviv. After the shelling of the Altalena, more than 200 Irgun fighters were arrested. Most of them were freed several weeks later. The Irgun members were then fully integrated into the IDF and not kept in separate units. 
The initial agreement for the integration of the Irgun into the IDF did not include Jerusalem, where a small remnant of the Irgun called the Jerusalem Battalion, numbering around 400 fighters, continued to operate independently of the government, as did Lehi. Following the assassination of UN Envoy for Peace Folke Bernadotte by Lehi in September 1948, the Israeli government determined to immediately dismantle the underground organizations. An ultimatum was issued to the Irgun to disband as an independent organization and integrate into the IDF or be destroyed, and Israeli troops surrounded the Irgun camp in the Katamon Quarter of Jerusalem. The Irgun accepted the ultimatum on September 22, 1948, and shortly afterward the remaining Irgun fighters in Jerusalem began enlisting in the IDF and turning over their arms. At Begin's orders, the Irgun in the diaspora formally disbanded on January 12, 1949, with the Irgun's former Paris headquarters becoming the European bureau of the Herut movement. To increase the popularity of its organization and ideology, the Irgun employed propaganda. This propaganda was mainly aimed at the British, and included the idea of Eretz Israel. According to Irgun propaganda posters, the Jewish state was to encompass not only all of Mandatory Palestine but also the Emirate of Transjordan. When the Labour Party came into power in Britain in July 1945, the Irgun published an announcement entitled "We Shall Give the Labour Government a Chance to Keep Its Word." In this publication, the Irgun stated, "Before it came to power, this Party undertook to return the Land of Israel to the people of Israel as a free state... Men and parties in opposition or in their struggle with their rivals, have, for twenty-five years, made us many promises and undertaken clear obligations; but, on coming to power, they have gone back on their words." 
In another publication, which followed a British counter-offensive against Jewish organizations in Palestine, the Irgun released a document titled "Mobilize the Nation!" The Irgun used this publication to paint the British regime as hostile to the Jewish people, even comparing the British to the Nazis. In response to what it saw as British aggression, the Irgun called for a Hebrew Provisional Government and a Hebrew Liberation Army. References to the Irgun as a terrorist organization came from sources including the Anglo-American Committee of Inquiry, newspapers and a number of prominent world and Jewish figures. Leaders within the mainstream Jewish organizations – the Jewish Agency, Haganah and Histadrut – as well as the British authorities, routinely condemned Irgun operations as terrorism and branded it an illegal organization as a result of the group's attacks on civilian targets. Privately, however, the Haganah kept up a dialogue with the dissident groups. Ironically, in early 1947, "the British army in Mandate Palestine banned the use of the term 'terrorist' to refer to the Irgun zvai Leumi ... because it implied that British forces had reason to be terrified." Irgun attacks prompted a formal declaration from the World Zionist Congress in 1946, which strongly condemned "the shedding of innocent blood as a means of political warfare." The Israeli government, in September 1948, acting in response to the assassination of Count Folke Bernadotte, outlawed the Irgun and Lehi groups, declaring them terrorist organizations under the Prevention of Terrorism Ordinance. In 1948, The New York Times published a letter signed by a number of prominent Jewish figures including Hannah Arendt, Albert Einstein, Sidney Hook, and Rabbi Jessurun Cardozo, which described Irgun as "a terrorist, right-wing, chauvinist organization in Palestine". The letter went on to state that Irgun and the Stern gang "inaugurated a reign of terror in the Palestine Jewish community. 
Teachers were beaten up for speaking against them, adults were shot for not letting their children join them. By gangster methods, beatings, window-smashing, and widespread robberies, the terrorists intimidated the population and exacted a heavy tribute." Soon after World War II, Winston Churchill said "we should never have stopped immigration before the war", but that the Irgun were "the vilest gangsters" and that he would "never forgive the Irgun terrorists." In 2006, Simon McDonald, the British ambassador in Tel Aviv, and John Jenkins, the Consul-General in Jerusalem, wrote in response to a pro-Irgun commemoration of the King David Hotel bombing: "We do not think that it is right for an act of terrorism, which led to the loss of many lives, to be commemorated." They also called for the removal of plaques at the site which presented as fact that the deaths were due to the British ignoring warning calls. The plaques, in their original version, read: "Warning phone calls had been made urging the hotel's occupants to leave immediately. For reasons known only to the British the hotel was not evacuated and after 25 minutes the bombs exploded, and to the Irgun's regret and dismay 91 persons were killed." McDonald and Jenkins said that no such warning calls were made, adding that even if they had been, "this does not absolve those who planted the bomb from responsibility for the deaths." Bruce Hoffman states: "Unlike many terrorist groups today, the Irgun's strategy was not deliberately to target or wantonly harm civilians." Max Abrahms writes that the Irgun "pioneered the practice of issuing pre-attack warnings to spare civilians", which was later emulated by the African National Congress (ANC) and other groups and proved "effective but not foolproof". In addition, Begin ordered attacks to take place at night and even during Shabbat to reduce the likelihood of civilian casualties. U.S. 
military intelligence found that "the Irgun Zvai Leumi is waging a general war against the government and at all times took special care not to cause damage or injury to persons". Although the King David Hotel bombing is widely considered a prima facie case of Irgun terrorism, Abrahms comments: "But this hotel wasn't a normal hotel. It served as the headquarters for the British Armed Forces in Palestine. By all accounts, the intent wasn't to harm civilians." Ha'aretz columnist and Israeli historian Tom Segev wrote of the Irgun: "In the second half of 1940, a few members of the Irgun Zvai Leumi (National Military Organization) – the anti-British terrorist group sponsored by the Revisionists and known by its acronym Etzel, and to the British simply as the Irgun – made contact with representatives of Fascist Italy, offering to cooperate against the British." Clare Hollingworth, the Daily Telegraph and The Scotsman correspondent in Jerusalem during 1948, wrote several outspoken reports after spending several weeks in West Jerusalem: "The shopkeepers are afraid not so much of shells as of raids by Irgun Zvai Leumi and the Stern Gang. These young toughs, who are beyond whatever law there is, have cleaned out most private houses of the richer classes & started to prey upon the shopkeepers." A US military intelligence report, dated January 1948, described Irgun recruiting tactics amongst Displaced Persons (DPs) in the camps across Germany. Alan Dershowitz wrote in his book The Case for Israel that, unlike the Haganah, the policy of the Irgun had been to encourage the flight of local Arabs.
[ { "paragraph_id": 0, "text": "The Irgun (Hebrew: ארגון; full title: Hebrew: הארגון הצבאי הלאומי בארץ ישראל Hā-ʾIrgun Ha-Tzvaʾī Ha-Leūmī b-Ērētz Yiśrāʾel, lit. \"The National Military Organization in the Land of Israel\"), or Etzel (Hebrew: אצ\"ל), was a Zionist paramilitary organization that operated in Mandate Palestine and then Israel between 1931 and 1948. It was an offshoot of the older and larger Jewish paramilitary organization Haganah (Hebrew: Hebrew: הגנה, Defence). The Irgun has been viewed as a terrorist organization or organization which carried out terrorist acts.", "title": "" }, { "paragraph_id": 1, "text": "The Irgun policy was based on what was then called Revisionist Zionism founded by Ze'ev Jabotinsky. According to Howard Sachar, \"The policy of the new organization was based squarely on Jabotinsky's teachings: every Jew had the right to enter Palestine; only active retaliation would deter the Arabs; only Jewish armed force would ensure the Jewish state\".", "title": "" }, { "paragraph_id": 2, "text": "Two of the operations for which the Irgun is best known are the bombing of the King David Hotel in Jerusalem on 22 July 1946 and the Deir Yassin massacre that killed at least 107 Palestinian Arab villagers, including women and children, carried out together with Lehi on 9 April 1948.", "title": "" }, { "paragraph_id": 3, "text": "The organization committed acts of terrorism against the British, whom it regarded as illegal occupiers, and against Arabs. In particular the Irgun was described as a terrorist organization by the United Nations, British, and United States governments; in media such as The New York Times newspaper; as well as by the Anglo-American Committee of Inquiry, the 1946 Zionist Congress and the Jewish Agency. 
However, academics such as Bruce Hoffman and Max Abrahms have written that the Irgun went to considerable lengths to avoid harming civilians, such as issuing pre-attack warnings; according to Hoffman, Irgun leadership urged \"targeting the physical manifestations of British rule while avoiding the deliberate infliction of bloodshed.\" Albert Einstein, in a letter to The New York Times in 1948, compared Irgun and its successor Herut party to \"Nazi and Fascist parties\" and described it as a \"terrorist, right wing, chauvinist organization\". Irgun's tactics appealed to many Jews who believed that any action taken in the cause of the creation of a Jewish state was justified, including terrorism.", "title": "" }, { "paragraph_id": 4, "text": "Irgun members were absorbed into the Israel Defense Forces at the start of the 1948 Arab–Israeli war. The Irgun was a political predecessor to Israel's right-wing Herut (or \"Freedom\") party, which led to today's Likud party. Likud has led or been part of most Israeli governments since 1977.", "title": "" }, { "paragraph_id": 5, "text": "Members of the Irgun came mostly from Betar and from the Revisionist Party both in Palestine and abroad. The Revisionist Movement made up a popular backing for the underground organization. Ze'ev Jabotinsky, founder of Revisionist Zionism, commanded the organization until he died in 1940. He formulated the general realm of operation, regarding Restraint and the end thereof, and was the inspiration for the organization overall. An additional major source of ideological inspiration was the poetry of Uri Zvi Greenberg. 
The symbol of the organization, with the motto רק כך (only thus), underneath a hand holding a rifle in the foreground of a map showing both Mandatory Palestine and the Emirate of Transjordan (at the time, both were administered under the terms of the British Mandate for Palestine), implied that force was the only way to \"liberate the homeland.\"", "title": "History" }, { "paragraph_id": 6, "text": "The number of members of the Irgun varied from a few hundred to a few thousand. Most of its members were people who joined the organization's command, under which they carried out various operations and filled positions, largely in opposition to British law. Most of them were \"ordinary\" people, who held regular jobs, and only a few dozen worked full-time in the Irgun.", "title": "History" }, { "paragraph_id": 7, "text": "The Irgun disagreed with the policy of the Yishuv and with the World Zionist Organization, both with regard to strategy and basic ideology and with regard to PR and military tactics, such as use of armed force to accomplish the Zionist ends, operations against the Arabs during the riots, and relations with the British mandatory government. Therefore, the Irgun tended to ignore the decisions made by the Zionist leadership and the Yishuv's institutions. This fact caused the elected bodies not to recognize the independent organization, and during most of the time of its existence the organization was seen as irresponsible, and its actions thus worthy of thwarting. Accordingly, the Irgun accompanied its armed operations with public-relations campaigns aiming to convince the public of the Irgun's way and the problems with the official political leadership of the Yishuv. 
The Irgun put out numerous advertisements, an underground newspaper and even ran the first independent Hebrew radio station – Kol Zion HaLochemet.", "title": "History" }, { "paragraph_id": 8, "text": "As members of an underground armed organization, Irgun personnel did not normally call Irgun by its name but rather used other names. In the first years of its existence it was known primarily as Ha-Haganah Leumit (The National Defense), and also by names such as Haganah Bet (\"Second Defense\"), Irgun Bet (\"Second Irgun\"), the Parallel Organization and the Rightwing Organization. Later on it became most widely known as המעמד (the Stand). The anthem adopted by the Irgun was \"Anonymous Soldiers\", written by Avraham (Yair) Stern who was at the time a commander in the Irgun. Later on Stern defected from the Irgun and founded Lehi, and the song became the anthem of the Lehi. The Irgun's new anthem then became the third verse of the \"Betar Song\", by Ze'ev Jabotinsky.", "title": "History" }, { "paragraph_id": 9, "text": "The Irgun gradually evolved from its humble origins into a serious and well-organized paramilitary organization. The movement developed a hierarchy of ranks and a sophisticated command-structure, and came to demand serious military training and strict discipline from its members. It developed clandestine networks of hidden arms-caches and weapons-production workshops, safe-houses, and training camps, along with a secret printing facility for propaganda posters.", "title": "History" }, { "paragraph_id": 10, "text": "The ranks of the Irgun were (in ascending order):", "title": "History" }, { "paragraph_id": 11, "text": "The Irgun was led by a High Command, which set policy and gave orders. Directly underneath it was a General Staff, which oversaw the activities of the Irgun. The General Staff was divided into a military and a support staff. 
The military staff was divided into operational units that oversaw operations and support units in charge of planning, instruction, weapons caches and manufacture, and first aid. The military and support staff never met jointly; they communicated through the High Command. Beneath the General Staff were six district commands: Jerusalem, Tel Aviv, Haifa-Galilee, Southern, Sharon, and Shomron, each led by a district commander. A local Irgun district unit was called a \"Branch\". A \"brigade\" in the Irgun was made up of three sections. A section was made up of two groups, at the head of each was a \"Group Head\", and a deputy. Eventually, various units were established, which answered to a \"Center\" or \"Staff\".", "title": "History" }, { "paragraph_id": 12, "text": "The head of the Irgun High Command was the overall commander of the organization, but the designation of his rank varied. During the revolt against the British, Irgun commander Menachem Begin and the entire High Command held the rank of Gundar Rishon. His predecessors, however, had held their own ranks. A rank of Military Commander (Seren) was awarded to the Irgun commander Yaakov Meridor and a rank of High Commander (Aluf) to David Raziel. Until his death in 1940, Jabotinsky was known as the \"Military Commander of the Etzel\" or the Ha-Matzbi Ha-Elyon (\"Supreme Commander\").", "title": "History" }, { "paragraph_id": 13, "text": "Under the command of Menachem Begin, the Irgun was divided into different corps:", "title": "History" }, { "paragraph_id": 14, "text": "The Irgun's commanders planned for it to have a regular combat force, a reserve, and shock units, but in practice there were not enough personnel for a reserve or for a shock force.", "title": "History" }, { "paragraph_id": 15, "text": "The Irgun emphasized that its fighters be highly disciplined. 
Strict drill exercises were carried out at ceremonies at different times, and strict attention was given to discipline, formal ceremonies and military relationships between the various ranks. The Irgun put out professional publications on combat doctrine, weaponry, leadership, drill exercises, etc. Among these publications were three books written by David Raziel, who had studied military history, techniques, and strategy:", "title": "History" }, { "paragraph_id": 16, "text": "A British analysis noted that the Irgun's discipline was \"as strict as any army in the world.\"", "title": "History" }, { "paragraph_id": 17, "text": "The Irgun operated a sophisticated recruitment and military-training regime. Those wishing to join had to find and make contact with a member, meaning only those who personally knew a member or were persistent could find their way in. Once contact had been established, a meeting was set up with the three-member selection committee at a safe-house, where the recruit was interviewed in a darkened room, with the committee either positioned behind a screen, or with a flashlight shone into the recruit's eyes. The interviewers asked basic biographical questions, and then asked a series of questions designed to weed out romantics and adventurers and those who had not seriously contemplated the potential sacrifices. Those selected attended a four-month series of indoctrination seminars in groups of five to ten, where they were taught the Irgun's ideology and the code of conduct it expected of its members. These seminars also had another purpose - to weed out the impatient and those of flawed purpose who had gotten past the selection interview. Then, members were introduced to other members, were taught the locations of safe-houses, and given military training. Irgun recruits trained with firearms, hand grenades, and were taught how to conduct combined attacks on targets. 
Arms handling and tactics courses were given in clandestine training camps, while practice shooting took place in the desert or by the sea. Eventually, separate training camps were established for heavy-weapons training. The most rigorous course was the explosives course for bomb-makers, which lasted a year. The British authorities believed that some Irgun members enlisted in the Jewish section of the Palestine Police Force for a year as part of their training, during which they also passed intelligence. In addition to the Irgun's sophisticated training program, many Irgun members were veterans of the Haganah (including the Palmach), the British Armed Forces, and Jewish partisan groups that had waged guerrilla warfare in Nazi-occupied Europe, thus bringing significant military training and combat experience into the organization. The Irgun also operated a course for its intelligence operatives, in which recruits were taught espionage, cryptography, and analysis techniques.", "title": "History" }, { "paragraph_id": 18, "text": "Of the Irgun's members, almost all were part-time members. They were expected to maintain their civilian lives and jobs, dividing their time between their civilian lives and underground activities. There were never more than 40 full-time members, who were given a small expense stipend on which to live on. Upon joining, every member received an underground name. The Irgun's members were divided into cells, and worked with the members of their own cells. The identities of Irgun members in other cells were withheld. This ensured that an Irgun member taken prisoner could betray no more than a few comrades.", "title": "History" }, { "paragraph_id": 19, "text": "In addition to the Irgun's members in Palestine, underground Irgun cells composed of local Jews were established in Europe following World War II. An Irgun cell was also established in Shanghai, home to many European-Jewish refugees. The Irgun also set up a Swiss bank account. 
Eli Tavin, the former head of Irgun intelligence, was appointed commander of the Irgun abroad.", "title": "History" }, { "paragraph_id": 20, "text": "In November 1947, the Jewish insurgency came to an end as the UN approved of the partition of Palestine, and the British had announced their intention to withdraw the previous month. As the British left and the 1947-48 Civil War in Mandatory Palestine got underway, the Irgun came out of the underground and began to function more as a standing army rather an underground organization. It began openly recruiting, training, and raising funds, and established bases, including training facilities. It also introduced field communications and created a medical unit and supply service.", "title": "History" }, { "paragraph_id": 21, "text": "Until World War II the group armed itself with weapons purchased in Europe, primarily Italy and Poland, and smuggled to Palestine. The Irgun also established workshops that manufactured spare parts and attachments for the weapons. Also manufactured were land mines and simple hand grenades. Another way in which the Irgun armed itself was theft of weapons from the British Police and military.", "title": "History" }, { "paragraph_id": 22, "text": "The Irgun's first steps were in the aftermath of the Riots of 1929. In the Jerusalem branch of the Haganah there were feelings of disappointment and internal unrest towards the leadership of the movements and the Histadrut (at that time the organization running the Haganah). These feelings were a result of the view that the Haganah was not adequately defending Jewish interests in the region. Likewise, critics of the leadership spoke out against alleged failures in the number of weapons, readiness of the movement and its policy of restraint and not fighting back. On April 10, 1931, commanders and equipment managers announced that they refused to return weapons to the Haganah that had been issued to them earlier, prior to the Nebi Musa holiday. 
These weapons were later returned by the commander of the Jerusalem branch, Avraham Tehomi, a.k.a. \"Gideon\". However, the commanders who decided to rebel against the leadership of the Haganah relayed a message regarding their resignations to the Vaad Leumi, and this schism created a new independent movement.", "title": "Prior to World War II" }, { "paragraph_id": 23, "text": "The leader of the new underground movement was Avraham Tehomi, alongside other founding members who were all senior commanders in the Haganah, members of Hapoel Hatzair and of the Histadrut. Also among them was Eliyahu Ben Horin, an activist in the Revisionist Party. This group was known as the \"Odessan Gang\", because they previously had been members of the Haganah Ha'Atzmit of Jewish Odessa. The new movement was named Irgun Tsvai Leumi (\"National Military Organization\") in order to emphasize its active nature in contrast to the Haganah. Moreover, the organization was founded with the desire to become a true military organization and not just a militia, as the Haganah was at the time.", "title": "Prior to World War II" }, { "paragraph_id": 24, "text": "In the autumn of that year the Jerusalem group merged with other armed groups affiliated with Betar. The Betar groups' center of activity was in Tel Aviv, and they began their activity in 1928 with the establishment of the \"Officers and Instructors School of Betar\". Students at this institution had broken away from the Haganah earlier, for political reasons, and the new group called itself the \"National Defense\", הגנה הלאומית. During the riots of 1929 Betar youth participated in the defense of Tel Aviv neighborhoods under the command of Yermiyahu Halperin, at the behest of the Tel Aviv city hall. After the riots the Tel Avivian group expanded, and was known as \"The Right Wing Organization\".", "title": "Prior to World War II" }, { "paragraph_id": 25, "text": "After the Tel Aviv expansion another branch was established in Haifa.
Towards the end of 1932 the Haganah branch of Safed also defected and joined the Irgun, as did many members of the Maccabi sports association. At that time the movement's underground newsletter, Ha'Metsudah (the Fortress), also began publication, expressing the active trend of the movement. The Irgun also increased its numbers by expanding the draft regiments of Betar – groups of volunteers committed to two years of security and pioneer activities. These regiments were based in places from which new Irgun strongholds grew, including the settlements of Yesod HaMa'ala, Mishmar HaYarden, Rosh Pina, Metula and Nahariya in the north; in the center – Hadera, Binyamina, Herzliya, Netanya and Kfar Saba; and south of there – Rishon LeZion, Rehovot and Ness Ziona. Later on regiments were also active in the Old City of Jerusalem (\"the Kotel Brigades\"), among others. Primary training centers were based in Ramat Gan, Qastina (by Kiryat Mal'akhi of today) and other places.", "title": "Prior to World War II" }, { "paragraph_id": 26, "text": "In 1933 there were some signs of unrest, seen in the local Arab leadership's incitement to act against the authorities. The strong British response put down the disturbances quickly. During that time the Irgun operated in a similar manner to the Haganah and was a guarding organization. The two organizations cooperated in ways such as coordination of posts and even intelligence sharing.", "title": "Prior to World War II" }, { "paragraph_id": 27, "text": "Within the Irgun, Tehomi was the first to serve as \"Head of the Headquarters\" or \"Chief Commander\". Alongside Tehomi served the senior commanders, or \"Headquarters\", of the movement.
As the organization grew, it was divided into district commands.", "title": "Prior to World War II" }, { "paragraph_id": 28, "text": "In August 1933 a \"Supervisory Committee\" for the Irgun was established, which included representatives from most of the Zionist political parties. The members of this committee were Meir Grossman (of the Hebrew State Party), Rabbi Meir Bar-Ilan (of the Mizrachi Party), either Immanuel Neumann or Yehoshua Supersky (of the General Zionists) and Ze'ev Jabotinsky or Eliyahu Ben Horin (of Hatzohar).", "title": "Prior to World War II" }, { "paragraph_id": 29, "text": "In protest against Jewish immigration to Palestine, and with the aim of ending it, the Great Arab Revolt of 1936–1939 broke out on April 19, 1936. The riots took the form of ambushes by Arab rioters on main roads, bombings of roads and settlements, and vandalism of property and agriculture. In the beginning, the Irgun and the Haganah generally maintained a policy of restraint, apart from a few instances. Some expressed resentment at this policy, leading to internal unrest in the two organizations. The Irgun tended to retaliate more often, and sometimes Irgun members patrolled areas beyond their positions in order to encounter attackers ahead of time. However, there were differences of opinion regarding what to do in the Haganah as well. Because many Betar youth members had joined, Jabotinsky (founder of Betar) had a great deal of influence over Irgun policy. Nevertheless, Jabotinsky was of the opinion that for moral reasons violent retaliation was not to be undertaken.", "title": "Prior to World War II" }, { "paragraph_id": 30, "text": "In November 1936 the Peel Commission was sent to inquire into the outbreak of the riots and propose a solution to end the Revolt.
In early 1937 there were still some in the Yishuv who felt the commission would recommend a partition of Mandatory Palestine (the land west of the Jordan River), thus creating a Jewish state on part of the land. The Irgun leadership, as well as the \"Supervisory Committee\", held similar beliefs, as did some members of the Haganah and the Jewish Agency. This belief strengthened the policy of restraint and led to the position that there was no room for defense institutions in the future Jewish state. Tehomi was quoted as saying: \"We stand before great events: a Jewish state and a Jewish army. There is a need for a single military force\". This position intensified the differences of opinion regarding the policy of restraint, both within the Irgun and within the political camp aligned with the organization. The leadership committee of the Irgun supported a merger with the Haganah. On April 24, 1937, a referendum was held among Irgun members regarding its continued independent existence. David Raziel and Avraham (Yair) Stern came out publicly in support of the continued existence of the Irgun:", "title": "Prior to World War II" }, { "paragraph_id": 31, "text": "In April 1937 the Irgun split after the referendum. Approximately 1,500–2,000 people, about half of the Irgun's membership, including the senior command staff and regional committee members, along with most of the Irgun's weapons, returned to the Haganah, which at that time was under the Jewish Agency's leadership. The Supervisory Committee's control over the Irgun ended, and Jabotinsky assumed command. In the returnees' opinion, the transfer of the Haganah to the leadership of the national institutions necessitated their return. Furthermore, they no longer saw significant ideological differences between the movements. Those who remained in the Irgun were primarily young activists, mostly laypeople, who sided with the independent existence of the Irgun.
In fact, most of those who remained were originally Betar people. Moshe Rosenberg estimated that approximately 1,800 members remained. In theory, the Irgun remained an organization not aligned with a political party, but in reality the supervisory committee was disbanded and the Irgun's continued ideological path was outlined according to Ze'ev Jabotinsky's school of thought and his decisions, until the movement eventually became Revisionist Zionism's military arm. One of the major changes in policy by Jabotinsky was the end of the policy of restraint.", "title": "Prior to World War II" }, { "paragraph_id": 32, "text": "On April 27, 1937, the Irgun formed a new headquarters, with Moshe Rosenberg at its head, Avraham (Yair) Stern as secretary, David Raziel as head of the Jerusalem branch, Hanoch Kalai as commander of Haifa and Aharon Haichman as commander of Tel Aviv. On 20 Tammuz (June 29), the anniversary of Theodor Herzl's death, a ceremony was held in honor of the reorganization of the underground movement. For security purposes this ceremony was held at a construction site in Tel Aviv.", "title": "Prior to World War II" }, { "paragraph_id": 33, "text": "Ze'ev Jabotinsky placed Col. Robert Bitker at the head of the Irgun. Bitker had previously served as Betar commissioner in China and had military experience. A few months later, probably due to his total incompatibility with the position, Jabotinsky replaced Bitker with Moshe Rosenberg. When the Peel Commission report was published a few months later, the Revisionist camp decided not to accept the commission's recommendations. Moreover, the organizations of Betar, Hatzohar and the Irgun began to increase their efforts to bring Jews to Eretz Israel (the Land of Israel) illegally. This Aliyah was known as the עליית אף על פי \"Af Al Pi (Nevertheless) Aliyah\". In contrast, the Jewish Agency acted on behalf of the Zionist interest on the political front and continued the policy of restraint.
From this point onwards the differences between the Haganah and the Irgun were much more obvious.", "title": "Prior to World War II" }, { "paragraph_id": 34, "text": "In line with Jabotinsky's \"Evacuation Plan\", which called for millions of European Jews to be brought to Palestine at once, the Irgun assisted the illegal immigration of European Jews to Palestine. Jabotinsky named this the \"National Sport\". The most significant part of this immigration prior to World War II was carried out by the Revisionist camp, largely because the Yishuv institutions and the Jewish Agency shied away from such actions on grounds of cost and their belief that Britain would in the future allow widespread Jewish immigration.", "title": "Prior to World War II" }, { "paragraph_id": 35, "text": "The Irgun joined forces with Hatzohar and Betar in September 1937, when it assisted with the landing of a convoy of 54 Betar members at Tantura Beach (near Haifa). The Irgun was responsible for discreetly bringing the Olim, or Jewish immigrants, to the beaches, and dispersing them among the various Jewish settlements. The Irgun also began participating in the organization of the immigration enterprise and undertook the task of accompanying the ships. This began with the ship Draga, which arrived at the coast of British Palestine in September 1938. In August of the same year, an agreement was made between Ari Jabotinsky (the son of Ze'ev Jabotinsky), the Betar representative, and Hillel Kook, the Irgun representative, to coordinate the immigration (also known as Ha'apala). This agreement was also made at the \"Paris Convention\" in February 1939, at which Ze'ev Jabotinsky and David Raziel were present.
Afterwards, the \"Aliyah Center\" was founded, made up of representatives of Hatzohar, Betar, and the Irgun, thereby making the Irgun a full participant in the process.", "title": "Prior to World War II" }, { "paragraph_id": 36, "text": "The difficult conditions on the ships demanded a high level of discipline. The people on board the ships were often split into units, led by commanders. In addition to having a daily roll call and the distribution of food and water (usually very little of either), organized talks were held to provide information regarding the actual arrival in Palestine. One of the largest ships was the Sakaria, with 2,300 passengers, which equalled about 0.5% of the Jewish population in Palestine. The first vessel arrived on April 13, 1937, and the last on February 13, 1940. All told, about 18,000 Jews immigrated to Palestine with the help of the Revisionist organizations and private initiatives by other Revisionists. Most were not caught by the British.", "title": "Prior to World War II" }, { "paragraph_id": 37, "text": "While continuing to defend settlements, Irgun members began attacks on Arab villages around April 1936, thus ending the policy of restraint. These attacks were intended to instill fear in the Arab side, in order to cause the Arabs to wish for peace and quiet. In March 1938, David Raziel wrote in the underground newspaper \"By the Sword\" a constitutive article for the Irgun overall, in which he coined the term \"Active Defense\":", "title": "Prior to World War II" }, { "paragraph_id": 38, "text": "By the end of World War II, more than 250 Arabs had been killed. 
Examples include:", "title": "Prior to World War II" }, { "paragraph_id": 39, "text": "During 1936, Irgun members carried out approximately ten attacks.", "title": "Prior to World War II" }, { "paragraph_id": 40, "text": "Throughout 1937 the Irgun continued this line of operation.", "title": "Prior to World War II" }, { "paragraph_id": 42, "text": "At that time, however, these acts were not yet a part of a formulated policy of the Irgun. Not all of the aforementioned operations received a commander's approval, and Jabotinsky was not in favor of such actions at the time. Jabotinsky still hoped to establish a Jewish force out in the open that would not have to operate underground. However, the failure, in its eyes, of the Peel Commission and the renewal of violence on the part of the Arabs caused the Irgun to rethink its official policy.", "title": "Prior to World War II" }, { "paragraph_id": 43, "text": "14 November 1937 was a watershed in Irgun activity. From that date, the Irgun increased its reprisals. Following an increase in the number of attacks aimed at Jews, including the killing of five kibbutz members near Kiryat Anavim (today kibbutz Ma'ale HaHamisha), the Irgun undertook a series of attacks in various places in Jerusalem, killing five Arabs. Operations were also undertaken in Haifa (shooting at the Arab-populated Wadi Nisnas neighborhood) and in Herzliya. The date is known as the day the policy of restraint (Havlagah) ended, or as Black Sunday, when operations resulted in the murder of 10 Arabs. This is when the organization, with the approval of Jabotinsky and Headquarters, fully changed its policy to one of \"active defense\".", "title": "Prior to World War II" }, { "paragraph_id": 44, "text": "The British responded with the arrest of Betar and Hatzohar members as suspected members of the Irgun.
Military courts were allowed to act under the \"Time of Emergency Regulations\" and could even sentence people to death. It was under these regulations that Yehezkel Altman, a guard in a Betar battalion in the Nahalat Yizchak neighborhood of Tel Aviv, was tried: without his commanders' knowledge, he had shot at an Arab bus in response to a shooting at Jewish vehicles on the Tel Aviv–Jerusalem road the day before. He later turned himself in and was sentenced to death, a sentence which was later commuted to life imprisonment.", "title": "Prior to World War II" }, { "paragraph_id": 45, "text": "Despite the arrests, Irgun members continued fighting. Jabotinsky lent his moral support to these activities. In a letter to Moshe Rosenberg on 18 March 1938 he wrote:", "title": "Prior to World War II" }, { "paragraph_id": 46, "text": "Although the Irgun continued activities such as these, they were greatly curtailed on Rosenberg's orders. Furthermore, for fear of the British threat of the death sentence for anyone found carrying a weapon, all operations were suspended for eight months. However, opposition to this policy gradually increased. In April 1938, responding to the killing of six Jews, Betar members from the Rosh Pina Brigade went on a reprisal mission without the consent of their commander, as described by historian Avi Shlaim:", "title": "Prior to World War II" }, { "paragraph_id": 47, "text": "Although the incident ended without casualties, the three were caught, and one of them – Shlomo Ben-Yosef – was sentenced to death. Demonstrations around the country, as well as pressure from institutions and from figures such as Dr. Chaim Weizmann and the Chief Rabbi of Mandatory Palestine, Yitzhak HaLevi Herzog, did not reduce his sentence. In Shlomo Ben-Yosef's writings in Hebrew was later found:", "title": "Prior to World War II" }, { "paragraph_id": 48, "text": "On 29 June 1938 he was executed, the first of the Olei Hagardom.
The Irgun revered him after his death and many regarded him as an example. In light of this, and due to the anger of the Irgun leadership over the decision to adopt a policy of restraint until that point, Jabotinsky relieved Rosenberg of his post and replaced him with David Raziel, who proved to be the most prominent Irgun commander until Menachem Begin. Jabotinsky simultaneously instructed the Irgun to end its policy of restraint, leading to armed offensive operations until the end of the Arab Revolt in 1939. During this time, the Irgun mounted about 40 operations against Arabs and Arab villages, for instance:", "title": "Prior to World War II" }, { "paragraph_id": 49, "text": "This action led the British Parliament to discuss the disturbances in Palestine. On 23 February 1939 the Secretary of State for the Colonies, Malcolm MacDonald, revealed the British intention to cancel the mandate and establish a state that would preserve Arab rights. This caused a wave of riots and attacks by Arabs against Jews. The Irgun responded four days later with a series of attacks on Arab buses and other sites. The British used military force against the Arab rioters, and in its latter stages the revolt by the Arab community in Palestine deteriorated into a series of internal gang wars.", "title": "Prior to World War II" }, { "paragraph_id": 50, "text": "At the same time, the Irgun also established itself in Europe. The Irgun built underground cells that participated in organizing migration to Palestine. The cells were made up almost entirely of Betar members, and their primary activity was military training in preparation for emigration to Palestine. Ties formed with the Polish authorities brought about courses in which Irgun commanders were trained by Polish officers in advanced military subjects such as guerrilla warfare, tactics and the laying of land mines. Avraham (Yair) Stern was notable among the cell organizers in Europe.
In 1937 the Polish authorities began to deliver large amounts of weapons to the underground. According to Irgun activists, Poland supplied the organization with 25,000 rifles as well as additional materiel and weapons; by summer 1939 the Irgun's Warsaw warehouses held 5,000 rifles and 1,000 machine guns. The training and support provided by Poland would have allowed the organization to mobilize 30,000–40,000 men. The transfer of handguns, rifles, explosives and ammunition stopped with the outbreak of World War II. Another field in which the Irgun operated was the training of pilots at the flight school in Lod, so that they could serve in the air force in the future war for independence.", "title": "Prior to World War II" }, { "paragraph_id": 51, "text": "Towards the end of 1938 there was progress towards aligning the ideologies of the Irgun and the Haganah. Many abandoned the belief that the land would be divided and a Jewish state would soon exist. The Haganah founded פו\"מ (pronounced poom), a special operations unit which carried out reprisal attacks following Arab violence. These operations continued into 1939. Furthermore, the opposition within the Yishuv to illegal immigration significantly decreased, and the Haganah began to bring Jews to Palestine using rented ships, as the Irgun had in the past.", "title": "Prior to World War II" }, { "paragraph_id": 52, "text": "The publishing of the MacDonald White Paper of 1939 brought with it new edicts that were intended to lead to a more equitable settlement between Jews and Arabs. However, it was considered by some Jews to have an adverse effect on the continued development of the Jewish community in Palestine. Chief among these effects were the prohibition on selling land to Jews and the smaller quotas for Jewish immigration. The entire Yishuv was furious at the contents of the White Paper.
There were demonstrations against the \"Treacherous Paper\", which was seen as precluding the establishment of a Jewish homeland in Palestine.", "title": "Prior to World War II" }, { "paragraph_id": 53, "text": "Under the temporary command of Hanoch Kalai, the Irgun began sabotaging strategic infrastructure such as electricity facilities, radio and telephone lines. It also started publicizing its activity and its goals. This was done in street announcements and newspapers, as well as on the underground radio station Kol Zion HaLochemet. On August 26, 1939, the Irgun killed Ralph Cairns, a British police officer who, as head of the Jewish Department in the Palestine Police, had been closing the net on Avraham Stern. The Irgun had accused him of torturing a number of its members. Cairns and Ronald Barker, another British police officer, were killed by a remotely detonated Irgun landmine.", "title": "Prior to World War II" }, { "paragraph_id": 54, "text": "The British increased their efforts against the Irgun. As a result, on August 31 the British police arrested members meeting in the Irgun headquarters. On the next day, September 1, 1939, World War II broke out.", "title": "Prior to World War II" }, { "paragraph_id": 55, "text": "Following the outbreak of war, Ze'ev Jabotinsky and the New Zionist Organization voiced their support for Britain and France. In mid-September 1939 Raziel was moved from his place of detention in Tzrifin. This, among other events, encouraged the Irgun to announce a cessation of its activities against the British so as not to hinder Britain's effort to fight \"the Hebrew's greatest enemy in the world – German Nazism\". This announcement ended with the hope that after the war a Hebrew state would be founded \"within the historical borders of the liberated homeland\". After this announcement Irgun, Betar and Hatzohar members, including Raziel and the Irgun leadership, were gradually released from detention.
The Irgun did not rule out joining the British army and the Jewish Brigade, and Irgun members did enlist in various British units. Irgun members also assisted British forces with intelligence in Romania, Bulgaria, Morocco and Tunisia. An Irgun unit also operated in Syria and Lebanon. David Raziel later died during one of these operations.", "title": "During World War II" }, { "paragraph_id": 56, "text": "During the Holocaust, Betar members revolted numerous times against the Nazis in occupied Europe. The largest of these revolts was the Warsaw Ghetto Uprising, in which an armed underground organization formed by Betar and Hatzohar, known as the Żydowski Związek Wojskowy (ŻZW) (Jewish Military Union), fought. Despite its political origins, the ŻZW accepted members without regard to political affiliation, and had contacts established before the war with elements of the Polish military. Because of differences over objectives and strategy, the ŻZW was unable to form a common front with the mainstream ghetto fighters of the Żydowska Organizacja Bojowa, and fought independently under the military leadership of Paweł Frenkiel and the political leadership of Dawid Wdowiński.", "title": "During World War II" }, { "paragraph_id": 57, "text": "There were instances of Betar members enlisted in the British military smuggling British weapons to the Irgun.", "title": "During World War II" }, { "paragraph_id": 58, "text": "From 1939 onwards, an Irgun delegation in the United States worked for the creation of a Jewish army made up of Jewish refugees and Jews from Palestine, to fight alongside the Allied Forces. In July 1943 the \"Emergency Committee to Save the Jewish People in Europe\" was formed, and worked until the end of the war to rescue the Jews of Europe from the Nazis and to garner public support for a Jewish state.
However, it was not until January 1944 that US President Franklin Roosevelt established the War Refugee Board, which achieved some success in saving European Jews.", "title": "During World War II" }, { "paragraph_id": 59, "text": "Throughout this entire period, the British continued enforcing the White Paper's provisions, which included a ban on the sale of land, restrictions on Jewish immigration and increased vigilance against illegal immigration. Part of the reason why the British banned land sales (to anyone) was the confused state of the post-Ottoman land registry; it was difficult to determine who actually owned the land that was for sale.", "title": "During World War II" }, { "paragraph_id": 60, "text": "Within the ranks of the Irgun this created much disappointment and unrest, at the center of which was disagreement with the leadership of the New Zionist Organization, David Raziel and the Irgun Headquarters. On June 18, 1939, Avraham (Yair) Stern and others of the leadership were released from prison, and a rift opened between them and the Irgun and Hatzohar leadership. The controversy centered on the questions of whether the underground movement should submit to public political leadership and whether it should fight the British. On his release from prison Raziel resigned from Headquarters. To his chagrin, senior Irgun members carried out independent operations, and some commanders even doubted Raziel's loyalty.", "title": "During World War II" }, { "paragraph_id": 61, "text": "In his place, Stern was elected to the leadership. In the past, Stern had founded secret Irgun cells in Poland without Jabotinsky's knowledge, in opposition to his wishes. Furthermore, Stern was in favor of removing the Irgun from the authority of the New Zionist Organization, whose leadership urged Raziel to return to the command of the Irgun. He finally consented.
Jabotinsky wrote to Raziel and to Stern, and these letters were distributed to the branches of the Irgun:", "title": "During World War II" }, { "paragraph_id": 62, "text": "Stern was sent a telegram with an order to obey Raziel, who was reappointed. However, these events did not prevent the splitting of the organization. Suspicion and distrust were rampant among the members. Out of the Irgun a new organization was created on July 17, 1940, first named \"The National Military Organization in Israel\" (as opposed to the \"National Military Organization in the Land of Israel\") and later renamed Lehi, an acronym for Lohamei Herut Israel, \"Fighters for the Freedom of Israel\" (לח\"י – לוחמי חירות ישראל). Jabotinsky died in New York on August 4, 1940, yet this did not prevent the Lehi split. Following Jabotinsky's death, ties were formed between the Irgun and the New Zionist Organization. These ties would last until 1944, when the Irgun declared a revolt against the British.", "title": "During World War II" }, { "paragraph_id": 63, "text": "The primary difference between the Irgun and the newly formed organization was the latter's intention to fight the British in Palestine, regardless of Britain's war against Germany. Later, additional operational and ideological differences developed that contradicted some of the Irgun's guiding principles. For example, the Lehi, unlike the Irgun, supported a population exchange with the local Arabs.", "title": "During World War II" }, { "paragraph_id": 64, "text": "The split damaged the Irgun both organizationally and in terms of morale. The death of Jabotinsky, the movement's spiritual leader, added to this feeling. Together, these factors brought about a mass departure of members. The British took advantage of this weakness to gather intelligence and arrest Irgun activists.
The new Irgun leadership, which included Meridor, Yerachmiel Ha'Levi, Rabbi Moshe Zvi Segal and others, used the forced hiatus in activity to rebuild the injured organization. This period was also marked by more cooperation between the Irgun and the Jewish Agency; however, David Ben-Gurion's uncompromising demand that the Irgun accept the Agency's command foiled any further cooperation.", "title": "During World War II" }, { "paragraph_id": 65, "text": "In both the Irgun and the Haganah more voices were being heard opposing any cooperation with the British. Nevertheless, an Irgun operation carried out in the service of Britain was aimed at sabotaging pro-Nazi forces in Iraq, including a planned assassination of Haj Amin al-Husayni. Among others, Raziel and Yaakov Meridor participated. On April 20, 1941, during a Luftwaffe air raid on RAF Habbaniya near Baghdad, David Raziel, commander of the Irgun, was killed.", "title": "During World War II" }, { "paragraph_id": 66, "text": "In late 1943 a joint Haganah–Irgun initiative was developed to form a single fighting body, unaligned with any political party, by the name of עם לוחם (Fighting Nation). The new body's first plan was to kidnap the British High Commissioner of Palestine, Sir Harold MacMichael, and take him to Cyprus. However, the Haganah leaked the planned operation and it was thwarted before it got off the ground. Nevertheless, at this stage the Irgun ceased its cooperation with the British. As Eliyahu Lankin tells in his book:", "title": "During World War II" }, { "paragraph_id": 67, "text": "In 1943 the Polish II Corps, commanded by Władysław Anders, arrived in Palestine from Iraq. The British insisted that no Jewish units of the army be created. Eventually, many of the soldiers of Jewish origin who arrived with the army were released and allowed to stay in Palestine. One of them was Menachem Begin, whose arrival in Palestine created new-found expectations within the Irgun and Betar.
Begin had served as head of the Betar movement in Poland, and was a respected leader. Yaakov Meridor, then the commander of the Irgun, raised the idea of appointing Begin to the post. In late 1943, when Begin accepted the position, a new leadership was formed. Meridor became Begin's deputy, and the other members of the board were Aryeh Ben Eliezer, Eliyahu Lankin, and Shlomo Lev Ami.", "title": "Revolt" }, { "paragraph_id": 68, "text": "On February 1, 1944, the Irgun put up posters all around the country, proclaiming a revolt against the British mandatory government. The posters began by saying that all of the Zionist movements stood by the Allied Forces and that over 25,000 Jews had enlisted in the British military. The hope of establishing a Jewish army had died. European Jewry was trapped and was being destroyed, yet Britain, for its part, did not allow any rescue missions. This part of the document ends with the following words:", "title": "Revolt" }, { "paragraph_id": 69, "text": "The Irgun then declared that, for its part, the ceasefire was over and it was now at war with the British. It demanded the transfer of rule to a Jewish government, which would implement ten policies. Among these were the mass evacuation of Jews from Europe, the signing of treaties with any state that recognized the Jewish state's sovereignty, including Britain, the granting of social justice to the state's residents, and full equality for the Arab population. The proclamation ended with:", "title": "Revolt" }, { "paragraph_id": 70, "text": "The Irgun began this campaign rather weakly. At the start of the revolt, it was only about 1,000 strong, including some 200 fighters.
It possessed about 4 submachine guns, 40 rifles, 60 pistols, 150 hand grenades, and 2,000 kilograms of explosive material, and its funds amounted to about £800.", "title": "Revolt" }, { "paragraph_id": 71, "text": "The Irgun began a militant campaign against the symbols of government, in an attempt to harm the regime's operation as well as its reputation. The first attacks came on February 12, 1944, against the government immigration offices, a symbol of the immigration laws, in the three largest cities: Jerusalem, Tel Aviv, and Haifa. The attacks went smoothly and ended with no casualties, as they took place on a Saturday night, when the buildings were empty. On February 27 the income tax offices in the same cities were bombed, also on a Saturday night; prior warnings were put up near the buildings. On March 23 the national headquarters building of the British police in the Russian Compound in Jerusalem was attacked, and part of it was blown up. These attacks in the first few months were sharply condemned by the organized leadership of the Yishuv and by the Jewish Agency, who saw them as dangerous provocations.", "title": "Revolt" }, { "paragraph_id": 72, "text": "At the same time the Lehi also renewed its attacks against the British. The Irgun continued to attack police stations and headquarters, including the Tegart fort at Latrun, a fortified police station. One relatively complex operation was the takeover of the radio station in Ramallah, on May 17, 1944.", "title": "Revolt" }, { "paragraph_id": 73, "text": "One symbolic act by the Irgun took place before Yom Kippur of 1944. The Irgun plastered notices around town warning that no British officers should come to the Western Wall on Yom Kippur, and for the first time since the mandate began no British police officers were there to prevent the Jews' traditional Shofar blowing at the end of the fast. After the fast that year the Irgun attacked four police stations in Arab settlements.
In order to obtain weapons, the Irgun carried out \"confiscation\" operations – they robbed British armouries and smuggled the stolen weapons to their own hiding places. During this phase of activity the Irgun also cut all of its official ties with the New Zionist Organization, so as not to tie that organization's fate to the underground.", "title": "Revolt" }, { "paragraph_id": 74, "text": "Begin wrote in his memoirs, The Revolt:", "title": "Revolt" }, { "paragraph_id": 75, "text": "In October 1944 the British began expelling hundreds of arrested Irgun and Lehi members to detention camps in Africa. 251 detainees from Latrun were flown on thirteen planes, on October 19, to a camp in Asmara, Eritrea. Eleven additional transports were made. Throughout the period of their detention, the detainees often initiated rebellions and hunger strikes. Many escape attempts were made until July 1948, when the exiles were returned to Israel. While there were numerous successful escapes from the camp itself, only nine men actually made it back all the way. One noted success was that of Yaakov Meridor, who escaped nine times before finally reaching Europe in April 1948. These tribulations were the subject of his book Long is the Path to Freedom: Chronicles of one of the Exiles.", "title": "Revolt" }, { "paragraph_id": 76, "text": "On November 6, 1944, Lord Moyne, British Deputy Resident Minister of State in Cairo, was assassinated by Lehi members Eliyahu Hakim and Eliyahu Bet-Zuri. This act raised concerns within the Yishuv about the British regime's likely reaction to the underground's violent acts. The Jewish Agency therefore decided to launch the Hunting Season, known as the saison (from the French \"la saison de chasse\").", "title": "Revolt" }, { "paragraph_id": 77, "text": "The Irgun's recuperation was noticeable when it began to renew its cooperation with the Lehi in May 1945, when it sabotaged oil pipelines, telephone lines and railroad bridges. 
All in all, over 1,000 members of the Irgun and Lehi were arrested and interned in British camps during the Saison. Eventually the Hunting Season died out, and there was even talk of cooperation with the Haganah leading to the formation of the Jewish Resistance Movement.", "title": "Revolt" }, { "paragraph_id": 78, "text": "Towards the end of July 1945 the Labour party in Britain was elected to power. The Yishuv leadership had high hopes that this would change the anti-Zionist policy that the British maintained at the time. However, these hopes were quickly dashed when the government limited Jewish immigration, with the intention that the Jewish population of Mandatory Palestine (the land west of the Jordan River) would not be more than one-third of the total. This, along with the stepping up of arrests and the pursuit of underground members and illegal immigration organizers, led to the formation of the Jewish Resistance Movement. This body consolidated the armed resistance to the British of the Irgun, Lehi, and Haganah. For ten months the Irgun and the Lehi cooperated, carrying out nineteen attacks and defense operations. The Haganah and Palmach carried out ten such operations. The Haganah also assisted in landing 13,000 illegal immigrants.", "title": "Revolt" }, { "paragraph_id": 79, "text": "Tension between the underground movements and the British increased with the increase in operations. On April 23, 1946, an operation undertaken by the Irgun to gain weapons from the Tegart fort at Ramat Gan resulted in a firefight with the police in which an Arab constable and two Irgun fighters were killed, including one who jumped on an explosive device to save his comrades. A third fighter, Dov Gruner, was wounded and captured. He stood trial and was sentenced to death by hanging, refusing to sign a pardon request.", "title": "Revolt" }, { "paragraph_id": 80, "text": "In 1946, British relations with the Yishuv worsened, building up to Operation Agatha of June 29. 
The authorities ignored the Anglo-American Committee of Inquiry's recommendation to allow 100,000 Jews into Palestine at once. As a result of the discovery of documents tying the Jewish Agency to the Jewish Resistance Movement, the Irgun was asked to speed up the plans for the King David Hotel bombing of July 22. The hotel, where the documents were held, served as the base for the British Secretariat, the military command and a branch of the Criminal Investigation Division of the police. The Irgun later claimed to have sent a warning that was ignored. Palestinian and U.S. sources confirm that the Irgun issued numerous warnings for civilians to evacuate the hotel prior to the bombing. In the attack, a 350 kg bomb placed in the basement of the hotel caused a large section of it to collapse, killing 91 people, of whom only 13 were British soldiers.", "title": "Revolt" }, { "paragraph_id": 81, "text": "The King David Hotel bombing and the arrest of Jewish Agency and other Yishuv leaders as part of Operation Agatha caused the Haganah to cease its armed activity against the British. Yishuv and Jewish Agency leaders were released from prison. From then until the end of the British mandate, resistance activities were led by the Irgun and Lehi. In early September 1946 the Irgun renewed its attacks against civil structures, railroads, communication lines and bridges. One operation was the attack on the train station in Jerusalem, in which Meir Feinstein was arrested; he later committed suicide while awaiting execution. According to the Irgun such armed attacks were legitimate, since the trains primarily served the British for redeployment of their forces. The Irgun also distributed leaflets, in three languages, warning the public not to use specific trains in danger of being attacked. For a while, the British stopped train traffic at night. 
The Irgun also carried out repeated attacks against military and police traffic using disguised roadside mines, electronically detonated by an operator hiding nearby as a vehicle passed; carried out arms raids, often disguised as British soldiers, against military bases and police stations; launched bombing, shooting, and mortar attacks against military and police installations and checkpoints; and, having lost access to Haganah funding following the collapse of the Jewish Resistance Movement, robbed banks to raise funds.", "title": "Revolt" }, { "paragraph_id": 82, "text": "On October 31, 1946, in response to the British barring the entry of Jews to Palestine, the Irgun blew up the British Embassy in Rome, a center of British efforts to monitor and stop Jewish immigration. The Irgun also carried out a few other operations in Europe: a British troop train was derailed and an attempt against another troop train failed. An attack on a British officers' club in Vienna took place in 1947, and an attack on another British officers' club in Vienna and a sergeants' club in Germany took place in 1948.", "title": "Revolt" }, { "paragraph_id": 83, "text": "In December 1946 a sentence of 18 years and 18 beatings was handed down to a young Irgun member for robbing a bank. The Irgun made good on a threat it had made: after the detainee was whipped, Irgun members kidnapped British officers and beat them in public. The operation, known as the \"Night of the Beatings\", brought an end to British punitive beatings. The British, taking these acts seriously, moved many British families in Palestine into the confines of military bases, and some moved home.", "title": "Revolt" }, { "paragraph_id": 84, "text": "On February 14, 1947, Ernest Bevin announced that the Jews and Arabs would not be able to agree on any British-proposed solution for the land, and therefore the issue must be brought to the United Nations (UN) for a final decision. 
The Yishuv regarded the transfer of the issue to the UN as a British attempt to gain time: a UN inquiry commission would be established and its recommendations debated, and all the while the Yishuv would weaken. Foundation for Immigration B increased the number of ships bringing in Jewish refugees. The British still strictly enforced the policy of limited Jewish immigration, and illegal immigrants were placed in detention camps in Cyprus, which increased the anger of the Jewish community towards the mandate government.", "title": "Revolt" }, { "paragraph_id": 85, "text": "The Irgun stepped up its activity and from February 19 until March 3 it attacked 18 British military camps, convoy routes, vehicles, and other facilities. The most notable of these attacks was the bombing of a British officers' club located in Goldsmith House in Jerusalem, which was in a heavily guarded security zone. Covered by machine-gun fire, an Irgun assault team in a truck penetrated the security zone and lobbed explosives into the building. Thirteen people, including two officers, were killed. As a result, martial law was imposed over much of the country, enforced by approximately 20,000 British soldiers. Despite this, attacks continued throughout the martial law period. The most notable one was an Irgun attack against the Royal Army Pay Corps base at the Schneller Orphanage, in which a British soldier was killed.", "title": "Revolt" }, { "paragraph_id": 86, "text": "Throughout its struggle against the British, the Irgun sought to publicize its cause around the world. By humiliating the British, it attempted to focus global attention on Palestine, hoping that any British overreaction would be widely reported, and thus result in more political pressure against the British. Begin described this strategy as turning Palestine into a \"glass house\". The Irgun also re-established many representative offices internationally, and by 1948 operated in 23 states. 
In these countries, the Irgun sometimes acted against the local British representatives or led public relations campaigns against Britain. According to Bruce Hoffman: \"In an era long before the advent of 24/7 global news coverage and instantaneous satellite-transmitted broadcasts, the Irgun deliberately attempted to appeal to a worldwide audience far beyond the immediate confines of its local struggle, and beyond even the ruling regime's own homeland.\"", "title": "Revolt" }, { "paragraph_id": 87, "text": "On April 16, 1947, Irgun members Dov Gruner, Yehiel Dresner, Eliezer Kashani, and Mordechai Alkahi were hanged in Acre Prison, while singing Hatikvah. On April 21 Meir Feinstein and Lehi member Moshe Barazani blew themselves up, using a smuggled grenade, hours before their scheduled hanging. And on May 4 one of the Irgun's largest operations took place – the raid on Acre Prison. The operation was carried out by 23 men, commanded by Dov Cohen – AKA \"Shimshon\", along with the help of the Irgun and Lehi prisoners inside the prison. The Irgun had informed them of the plan in advance and smuggled in explosives. After a hole was blasted in the prison wall, the 41 Irgun and Lehi members who had been chosen to escape then ran to the hole, blasting through inner prison gates with the smuggled explosives. Meanwhile, Irgun teams mined roads and launched a mortar attack on a nearby British Army camp to delay the arrival of responding British forces. Although the 41 escapees managed to get out of the prison and board the escape trucks, some were rapidly recaptured and nine of the escapees and attackers were killed. Five Irgun men in the attacking party were also captured. Overall, 27 of the 41 designated escapees managed to escape. Along with the underground movement members, other criminals – including 214 Arabs – also escaped. 
Of the five attackers who were caught, three – Avshalom Haviv, Meir Nakar, and Yaakov Weiss – were sentenced to death.", "title": "Revolt" }, { "paragraph_id": 88, "text": "After the death sentences of the three were confirmed, the Irgun tried to save them by kidnapping hostages — British sergeants Clifford Martin and Mervyn Paice — in the streets of Netanya. British forces closed off and combed the area in search of the two, but did not find them. On July 29, 1947, in the afternoon, Meir Nakar, Avshalom Haviv, and Yaakov Weiss were executed. Approximately thirteen hours later the hostages were hanged in retaliation by the Irgun, and their bodies, booby-trapped with an explosive, were afterwards strung up from trees in woodlands south of Netanya. This action caused an outcry in Britain and was condemned both there and by Jewish leaders in Palestine.", "title": "Revolt" }, { "paragraph_id": 89, "text": "This episode has been cited as a major influence on the British decision to terminate the Mandate and leave Palestine. The United Nations Special Committee on Palestine (UNSCOP) was also influenced by this and other actions. At the same time another incident was developing – the events of the ship Exodus 1947. The 4,500 Holocaust survivors on board were not allowed to enter Palestine. UNSCOP also covered the events. 
Some of its members were even present at Haifa port when the putative immigrants were forcefully removed from their ship (later found to have been rigged with an IED by some of its passengers) onto the deportation ships, and later commented that this strong image helped them press for an immediate solution for Jewish immigration and the question of Palestine.", "title": "Revolt" }, { "paragraph_id": 90, "text": "Two weeks later, the House of Commons convened for a special debate on events in Palestine, and concluded that their soldiers should be withdrawn as soon as possible.", "title": "Revolt" }, { "paragraph_id": 91, "text": "UNSCOP's conclusion was a unanimous decision to end the British mandate, and a majority decision to divide Mandatory Palestine (the land west of the Jordan River) between a Jewish state and an Arab state. During the UN's deliberations regarding the committee's recommendations the Irgun avoided initiating any attacks, so as not to influence the UN negatively on the idea of a Jewish state. On November 29 the UN General Assembly voted in favor of ending the mandate and establishing two states on the land. That very same day the Irgun and the Lehi renewed their attacks on British targets. The next day the local Arabs began attacking the Jewish community, thus beginning the first stage of the 1948 Palestine War. The first attacks on Jews were in Jewish neighborhoods of Jerusalem, in and around Jaffa, and in Bat Yam, Holon, and the Ha'Tikvah neighborhood in Tel Aviv.", "title": "1948 Palestine War" }, { "paragraph_id": 92, "text": "In the autumn of 1947, the Irgun had approximately 4,000 members. The goal of the organization at that point was the conquest of the land between the Jordan River and the Mediterranean Sea for the future Jewish state and preventing Arab forces from driving out the Jewish community. The Irgun became almost an overt organization, establishing military bases in Ramat Gan and Petah Tikva. 
It began recruiting openly, thus significantly increasing in size. During the war the Irgun fought alongside the Lehi and the Haganah in the front against the Arab attacks. At first the Haganah maintained a defensive policy, as it had until then, but after the Convoy of 35 incident it completely abandoned its policy of restraint: \"Distinguishing between individuals is no longer possible, for now – it is a war, and even the innocent shall not be absolved.\"", "title": "1948 Palestine War" }, { "paragraph_id": 93, "text": "The Irgun also began carrying out reprisal missions, as it had under David Raziel's command. At the same time though, it published announcements calling on the Arabs to lay down their weapons and maintain a ceasefire:", "title": "1948 Palestine War" }, { "paragraph_id": 94, "text": "However, the mutual attacks continued. The Irgun attacked the Arab villages of Tira near Haifa, Yehudiya ('Abassiya) in the center, and Shuafat by Jerusalem. The Irgun also attacked in the Wadi Rushmiya neighborhood in Haifa and Abu Kabir in Jaffa. On December 29 Irgun units arrived by boat to the Jaffa shore and a gunfight between them and Arab gangs ensued. The following day a bomb was thrown from a speeding Irgun car at a group of Arab men waiting to be hired for the day at the Haifa oil refinery, resulting in seven Arabs killed, and dozens injured. In response, some Arab workers attacked Jews in the area, killing 41. This sparked a Haganah response in Balad al-Sheykh, which resulted in the deaths of 60 civilians. The Irgun's goal in the fighting was to move the battles from Jewish populated areas to Arab populated areas. On January 1, 1948, the Irgun attacked again in Jaffa, its men wearing British uniforms; later in the month it attacked in Beit Nabala, a base for many Arab fighters. On 5 January 1948 the Irgun detonated a lorry bomb outside Jaffa's Ottoman built Town Hall, killing 14 and injuring 19. 
In Jerusalem, two days later, Irgun members in a stolen police van rolled a barrel bomb into a large group of civilians who were waiting for a bus by the Jaffa Gate, killing around sixteen. In the pursuit that followed three of the attackers were killed and two taken prisoner.", "title": "1948 Palestine War" }, { "paragraph_id": 95, "text": "On 6 April 1948, the Irgun raided the British Army camp at Pardes Hanna killing six British soldiers and their commanding officer.", "title": "1948 Palestine War" }, { "paragraph_id": 96, "text": "The Deir Yassin massacre was carried out in a village west of Jerusalem that had signed a non-belligerency pact with its Jewish neighbors and the Haganah, and repeatedly had barred entry to foreign irregulars. On 9 April approximately 120 Irgun and Lehi members began an operation to capture the village. During the operation, the villagers fiercely resisted the attack, and a battle broke out. In the end, the Irgun and Lehi forces advanced gradually through house-to-house fighting. The village was only taken after the Irgun began systematically dynamiting houses, and after a Palmach unit intervened and employed mortar fire to silence the villagers' sniper positions. The operation resulted in five Jewish fighters dead and 40 injured. Some 100 to 120 villagers were also killed.", "title": "1948 Palestine War" }, { "paragraph_id": 97, "text": "There are allegations that Irgun and Lehi forces committed war crimes during and after the capture of the village. These allegations include reports that fleeing individuals and families were fired at, and prisoners of war were killed after their capture. A Haganah report writes:", "title": "1948 Palestine War" }, { "paragraph_id": 98, "text": "Some say that this incident was an event that accelerated the Arab exodus from Palestine.", "title": "1948 Palestine War" }, { "paragraph_id": 99, "text": "The Irgun cooperated with the Haganah in the conquest of Haifa. 
At the regional commander's request, on April 21 the Irgun took over an Arab post above Hadar Ha'Carmel as well as the Arab neighborhood of Wadi Nisnas, adjacent to the Lower City.", "title": "1948 Palestine War" }, { "paragraph_id": 100, "text": "The Irgun acted independently in the conquest of Jaffa (part of the proposed Arab State according to the UN Partition Plan). On April 25 Irgun units, about 600 strong, left the Irgun base in Ramat Gan towards Arab Jaffa. Difficult battles ensued, and the Irgun faced resistance from the Arabs as well as the British. Under the command of Amichai \"Gidi\" Paglin, the Irgun's chief operations officer, the Irgun captured the neighborhood of Manshiya, which threatened the city of Tel Aviv. Afterwards the force continued to the sea, towards the area of the port, and using mortars, shelled the southern neighborhoods.", "title": "1948 Palestine War" }, { "paragraph_id": 101, "text": "In his report concerning the fall of Jaffa the local Arab military commander, Michel Issa, wrote: \"Continuous shelling with mortars of the city by Jews for four days, beginning 25 April, [...] caused inhabitants of city, unaccustomed to such bombardment, to panic and flee.\" According to Morris the shelling was done by the Irgun. Their objective was \"to prevent constant military traffic in the city, to break the spirit of the enemy troops [and] to cause chaos among the civilian population in order to create a mass flight.\" High Commissioner Cunningham wrote a few days later \"It should be made clear that IZL attack with mortars was indiscriminate and designed to create panic among the civilian inhabitants.\" The British demanded the evacuation of the newly conquered city, and militarily intervened, ending the Irgun offensive. 
Heavy British shelling against Irgun positions in Jaffa failed to dislodge them, and when British armor pushed into the city, the Irgun resisted; a bazooka team managed to knock out one tank, buildings were blown up and collapsed onto the streets as the armor advanced, and Irgun men crawled up and tossed live dynamite sticks onto the tanks. The British withdrew, and opened negotiations with the Jewish authorities. An agreement was worked out, under which Operation Hametz would be stopped and the Haganah would not attack Jaffa until the end of the Mandate. The Irgun would evacuate Manshiya, with Haganah fighters replacing them. British troops would patrol its southern end and occupy the police fort. The Irgun had previously agreed with the Haganah that British pressure would not lead to withdrawal from Jaffa and that custody of captured areas would be turned over to the Haganah. The city ultimately fell on May 13 after Haganah forces entered the city and took control of the rest of the city, from the south – part of the Hametz Operation which included the conquest of a number of villages in the area. The battles in Jaffa were a great victory for the Irgun. This operation was the largest in the history of the organization, which took place in a highly built up area that had many militants in shooting positions. During the battles explosives were used in order to break into homes and continue forging a way through them. Furthermore, this was the first occasion in which the Irgun had directly fought British forces, reinforced with armor and heavy weaponry. The city began these battles with an Arab population estimated at 70,000, which shrank to some 4,100 Arab residents by the end of major hostilities. Since the Irgun captured the neighborhood of Manshiya on its own, causing the flight of many of Jaffa's residents, the Irgun took credit for the conquest of Jaffa. 
It had lost 42 dead and about 400 wounded during the battle.", "title": "1948 Palestine War" }, { "paragraph_id": 102, "text": "On May 14, 1948, the establishment of the State of Israel was proclaimed. The declaration of independence was followed by the establishment of the Israel Defense Forces (IDF), and the process of absorbing all military organizations into the IDF started. On June 1, an agreement was signed between Menachem Begin and Yisrael Galili for the absorption of the Irgun into the IDF. One of the clauses stated that the Irgun had to stop smuggling arms. Meanwhile, in France, Irgun representatives purchased a ship, renamed Altalena (a pseudonym of Ze'ev Jabotinsky), and weapons. The ship sailed on June 11 and arrived at the Israeli coast on June 20, during the first truce of the 1948 Arab–Israeli War. United Nations Security Council Resolution 50 had declared an arms embargo in the region, but neither side respected it.", "title": "Integration with the IDF and the Altalena Affair" }, { "paragraph_id": 103, "text": "When the ship arrived, the Israeli government, headed by Ben-Gurion, was adamant in its demand that the Irgun surrender and hand over all of the weapons. Ben-Gurion said: \"We must decide whether to hand over power to Begin or to order him to cease his activities. If he does not do so, we will open fire! Otherwise, we must decide to disperse our own army.\"", "title": "Integration with the IDF and the Altalena Affair" }, { "paragraph_id": 104, "text": "There were two confrontations between the newly formed IDF and the Irgun: when Altalena reached Kfar Vitkin in the late afternoon of Sunday, June 20, many Irgun militants, including Begin, waited on the shore. A clash with the Alexandroni Brigade, commanded by Dan Even (Epstein), occurred. Fighting ensued and there were a number of casualties on both sides. 
The clash ended in a ceasefire and the transfer of the weapons on shore to the local IDF commander, and with the ship, now reinforced with local Irgun members, including Begin, sailing to Tel Aviv, where the Irgun had more supporters. Many Irgun members, who joined the IDF earlier that month, left their bases and concentrated on the Tel Aviv beach. A confrontation between them and the IDF units started. In response, Ben-Gurion ordered Yigael Yadin (acting Chief of Staff) to concentrate large forces on the Tel Aviv beach and to take the ship by force. Heavy guns were transferred to the area and at four in the afternoon, Ben-Gurion ordered the shelling of the Altalena. One of the shells hit the ship, which began to burn. Sixteen Irgun fighters were killed in the confrontation with the army; six were killed in the Kfar Vitkin area and ten on Tel Aviv beach. Three IDF soldiers were killed: two at Kfar Vitkin and one in Tel Aviv.", "title": "Integration with the IDF and the Altalena Affair" }, { "paragraph_id": 105, "text": "After the shelling of the Altalena, more than 200 Irgun fighters were arrested. Most of them were freed several weeks later. The Irgun militants were then fully integrated with the IDF and not kept in separate units.", "title": "Integration with the IDF and the Altalena Affair" }, { "paragraph_id": 106, "text": "The initial agreement for the integration of the Irgun into the IDF did not include Jerusalem, where a small remnant of the Irgun called the Jerusalem Battalion, numbering around 400 fighters, and Lehi, continued to operate independently of the government. Following the assassination of UN Envoy for Peace Folke Bernadotte by Lehi in September 1948, the Israeli government determined to immediately dismantle the underground organizations. An ultimatum was issued to the Irgun to liquidate as an independent organization and integrate into the IDF or be destroyed, and Israeli troops surrounded the Irgun camp in the Katamon Quarter of Jerusalem. 
The Irgun accepted the ultimatum on September 22, 1948, and shortly afterward the remaining Irgun fighters in Jerusalem began enlisting in the IDF and turning over their arms. At Begin's orders, the Irgun in the diaspora formally disbanded on January 12, 1949, with the Irgun's former Paris headquarters becoming the European bureau of the Herut movement.", "title": "Integration with the IDF and the Altalena Affair" }, { "paragraph_id": 107, "text": "To increase the popularity of its organization and ideology, the Irgun employed propaganda. This propaganda was mainly aimed at the British, and included the idea of Eretz Israel. According to Irgun propaganda posters, the Jewish state was not only to encompass all of Mandatory Palestine, but also the Emirate of Transjordan.", "title": "Propaganda" }, { "paragraph_id": 108, "text": "When the Labour party came into power in Britain in July 1945, Irgun published an announcement entitled \"We shall give the Labour Government a Chance to Keep Its Word.\" In this publication, Irgun stated, \"Before it came to power, this Party undertook to return the Land of Israel to the people of Israel as a free state... Men and parties in opposition or in their struggle with their rivals, have, for twenty-five years, made us many promises and undertaken clear obligations; but, on coming to power, they have gone back on their words.\" In another publication, which followed a British counter-offensive against Jewish organizations in Palestine, Irgun released a document titled \"Mobilize the Nation!\" Irgun used this publication to paint the British regime as hostile to the Jewish people, even comparing the British to the Nazis. 
In response to what was seen as British aggression, Irgun called for a Hebrew Provisional Government, and a Hebrew Liberation Army.", "title": "Propaganda" }, { "paragraph_id": 109, "text": "References to the Irgun as a terrorist organization came from sources including the Anglo-American Committee of Inquiry, newspapers and a number of prominent world and Jewish figures. Leaders within the mainstream Jewish organizations, the Jewish Agency, Haganah and Histadrut, as well as the British authorities, routinely condemned Irgun operations as terrorism and branded it an illegal organization as a result of the group's attacks on civilian targets. However, privately at least the Haganah kept a dialogue with the dissident groups. Ironically, in early 1947, \"the British army in Mandate Palestine banned the use of the term 'terrorist' to refer to the Irgun zvai Leumi ... because it implied that British forces had reason to be terrified.\"", "title": "Criticism" }, { "paragraph_id": 110, "text": "Irgun attacks prompted a formal declaration from the World Zionist Congress in 1946, which strongly condemned \"the shedding of innocent blood as a means of political warfare.\"", "title": "Criticism" }, { "paragraph_id": 111, "text": "The Israeli government, in September 1948, acting in response to the assassination of Count Folke Bernadotte, outlawed the Irgun and Lehi groups, declaring them terrorist organizations under the Prevention of Terrorism Ordinance.", "title": "Criticism" }, { "paragraph_id": 112, "text": "In 1948, The New York Times published a letter signed by a number of prominent Jewish figures including Hannah Arendt, Albert Einstein, Sidney Hook, and Rabbi Jessurun Cardozo, which described Irgun as \"a terrorist, right-wing, chauvinist organization in Palestine\". The letter went on to state that Irgun and the Stern gang \"inaugurated a reign of terror in the Palestine Jewish community. 
Teachers were beaten up for speaking against them, adults were shot for not letting their children join them. By gangster methods, beatings, window-smashing, and widespread robberies, the terrorists intimidated the population and exacted a heavy tribute.\"", "title": "Criticism" }, { "paragraph_id": 113, "text": "Soon after World War II, Winston Churchill said \"we should never have stopped immigration before the war\", but that the Irgun were \"the vilest gangsters\" and that he would \"never forgive the Irgun terrorists.\"", "title": "Criticism" }, { "paragraph_id": 114, "text": "In 2006, Simon McDonald, the British ambassador in Tel Aviv, and John Jenkins, the Consul-General in Jerusalem, wrote in response to a pro-Irgun commemoration of the King David Hotel bombing: \"We do not think that it is right for an act of terrorism, which led to the loss of many lives, to be commemorated.\" They also called for the removal of plaques at the site which presented as a fact that the deaths were due to the British ignoring warning calls. The plaques, in their original version, read:", "title": "Criticism" }, { "paragraph_id": 115, "text": "Warning phone calls had been made urging the hotel's occupants to leave immediately. 
For reasons known only to the British the hotel was not evacuated and after 25 minutes the bombs exploded, and to the Irgun's regret and dismay 91 persons were killed.", "title": "Criticism" }, { "paragraph_id": 116, "text": "McDonald and Jenkins said that no such warning calls were made, adding that even if they had, \"this does not absolve those who planted the bomb from responsibility for the deaths.\"", "title": "Criticism" }, { "paragraph_id": 117, "text": "Bruce Hoffman states: \"Unlike many terrorist groups today, the Irgun's strategy was not deliberately to target or wantonly harm civilians.\" Max Abrahms writes that the Irgun \"pioneered the practice of issuing pre-attack warnings to spare civilians\", which was later emulated by the African National Congress (ANC) and other groups and proved \"effective but not foolproof\". In addition, Begin ordered attacks to take place at night and even during Shabbat to reduce the likelihood of civilian casualties. U.S. military intelligence found that \"the Irgun Zvai Leumi is waging a general war against the government and at all times took special care not to cause damage or injury to persons\". Although the King David Hotel bombing is widely considered a prima facie case of Irgun terrorism, Abrahms comments: \"But this hotel wasn't a normal hotel. It served as the headquarters for the British Armed Forces in Palestine. 
By all accounts, the intent wasn't to harm civilians.\"", "title": "Criticism" }, { "paragraph_id": 118, "text": "Ha'aretz columnist and Israeli historian Tom Segev wrote of the Irgun: \"In the second half of 1940, a few members of the Irgun Zvai Leumi (National Military Organization) – the anti-British terrorist group sponsored by the Revisionists and known by its acronym Etzel, and to the British simply as the Irgun – made contact with representatives of Fascist Italy, offering to cooperate against the British.\"", "title": "Criticism" }, { "paragraph_id": 119, "text": "Clare Hollingworth, the Daily Telegraph and The Scotsman correspondent in Jerusalem during 1948 wrote several outspoken reports after spending several weeks in West Jerusalem:", "title": "Criticism" }, { "paragraph_id": 120, "text": "'The shopkeepers are afraid not so much of shells as of raids by Irgun Zvai Leumi and the Stern Gang. These young toughs, who are beyond whatever law there is have cleaned out most private houses of the richer classes & started to prey upon the shopkeepers.'", "title": "Criticism" }, { "paragraph_id": 121, "text": "A US military intelligence report, dated January 1948, described Irgun recruiting tactics amongst Displaced Persons (DP) in the camps across Germany:", "title": "Criticism" }, { "paragraph_id": 122, "text": "Alan Dershowitz wrote in his book The Case for Israel that unlike the Haganah, the policy of the Irgun had been to encourage the flight of local Arabs.", "title": "Criticism" } ]
The Irgun, or Etzel, was a Zionist paramilitary organization that operated in Mandate Palestine and then Israel between 1931 and 1948. It was an offshoot of the older and larger Jewish paramilitary organization Haganah. The Irgun has been viewed as a terrorist organization, or as an organization that carried out terrorist acts. The Irgun policy was based on what was then called Revisionist Zionism founded by Ze'ev Jabotinsky. According to Howard Sachar, "The policy of the new organization was based squarely on Jabotinsky's teachings: every Jew had the right to enter Palestine; only active retaliation would deter the Arabs; only Jewish armed force would ensure the Jewish state". Two of the operations for which the Irgun is best known are the bombing of the King David Hotel in Jerusalem on 22 July 1946 and the Deir Yassin massacre that killed at least 107 Palestinian Arab villagers, including women and children, carried out together with Lehi on 9 April 1948. The organization committed acts of terrorism against the British, whom it regarded as illegal occupiers, and against Arabs. In particular, the Irgun was described as a terrorist organization by the United Nations, British, and United States governments; in media such as The New York Times newspaper; as well as by the Anglo-American Committee of Inquiry, the 1946 Zionist Congress and the Jewish Agency. However, academics such as Bruce Hoffman and Max Abrahms have written that the Irgun went to considerable lengths to avoid harming civilians, such as issuing pre-attack warnings; according to Hoffman, Irgun leadership urged "targeting the physical manifestations of British rule while avoiding the deliberate infliction of bloodshed." Albert Einstein, in a letter to The New York Times in 1948, compared Irgun and its successor Herut party to "Nazi and Fascist parties" and described it as a "terrorist, right wing, chauvinist organization". 
Irgun's tactics appealed to many Jews who believed that any action taken in the cause of the creation of a Jewish state was justified, including terrorism. Irgun members were absorbed into the Israel Defense Forces at the start of the 1948 Arab–Israeli war. The Irgun was a political predecessor to Israel's right-wing Herut party, which led to today's Likud party. Likud has led or been part of most Israeli governments since 1977.
2001-12-17T21:11:53Z
2023-12-31T20:36:51Z
[ "Template:When", "Template:More citations needed section", "Template:Cite book", "Template:In lang", "Template:Commons category", "Template:Webarchive", "Template:Lang-he", "Template:Clarify", "Template:Reflist", "Template:ISBN", "Template:Pp-30-500", "Template:Blockquote", "Template:Cite news", "Template:Zionism", "Template:Main", "Template:Citation needed", "Template:Transl", "Template:By whom", "Template:Cite web", "Template:Cite magazine", "Template:Authority control", "Template:Short description", "Template:Infobox military unit" ]
https://en.wikipedia.org/wiki/Irgun
15,408
Isoroku Yamamoto
Isoroku Yamamoto (山本 五十六, Yamamoto Isoroku, April 4, 1884 – April 18, 1943) was a Marshal Admiral of the Imperial Japanese Navy (IJN) and the commander-in-chief of the Combined Fleet during World War II. Yamamoto held several important posts in the Imperial Navy, and undertook many of its changes and reorganizations, especially its development of naval aviation. He was the commander-in-chief during the early years of the Pacific War and oversaw major engagements including the attack on Pearl Harbor and the Battle of Midway. Yamamoto was killed in April 1943 after American code breakers identified his flight plans, enabling the United States Army Air Forces to shoot down his plane. His death was a major blow to Japanese military morale during World War II. Yamamoto was born Isoroku Takano (高野 五十六, Takano Isoroku) in Nagaoka, Niigata. His father, Sadayoshi Takano (高野 貞吉), was an intermediate-rank samurai of the Nagaoka Domain. "Isoroku" is a Japanese term meaning "56"; the name referred to his father's age at Isoroku's birth. In 1916, Isoroku was adopted into the Yamamoto family (another family of former Nagaoka samurai) and took the Yamamoto name. It was a common practice for samurai families lacking sons to adopt suitable young men in this fashion to carry on the family name, the rank and the income that went with it. Isoroku married Reiko Mihashi in 1918; they had two sons and two daughters. Yamamoto graduated from the Imperial Japanese Naval Academy in 1904, ranking 11th in his class. He subsequently served on the armored cruiser Nisshin during the Russo-Japanese War. He was wounded at the Battle of Tsushima, losing his index and middle fingers on his left hand, as the cruiser was hit repeatedly by the Russian battle line. He returned to the Naval Staff College in 1914, emerging as a lieutenant commander in 1916. In December 1919, he was promoted to commander. 
Yamamoto was part of the Japanese Navy establishment, which was a rival of the more aggressive Army establishment, especially the officers of the Kwantung Army. He promoted a policy of a strong fleet to project force through gunboat diplomacy, rather than a fleet used primarily for the transport of invasion land forces, as some of his political opponents in the Army wanted. This stance led him to oppose the invasion of China. He also opposed war against the United States, partly because of his studies at Harvard University (1919–1921) and his two postings as a naval attaché in Washington, D.C., where he learned to speak fluent English. Yamamoto traveled extensively in the United States during his tour of duty there, where he studied American customs and business practices. He was promoted to captain in 1923. On February 13, 1924, Captain Yamamoto was part of the Japanese delegation visiting the United States Naval War College. Later that year, he changed his specialty from gunnery to naval aviation. His first command was the cruiser Isuzu in 1928, followed by the aircraft carrier Akagi. He participated in the London Naval Conference of 1930 as a rear admiral and the London Naval Conference of 1935 as a vice admiral, as the growing military influence on the government at the time deemed that a career military specialist needed to accompany the diplomats to the arms limitations talks. Yamamoto was a strong proponent of naval aviation and served as head of the Aeronautics Department, before accepting a post as commander of the First Carrier Division. Yamamoto opposed the Japanese invasion of northeast China in 1931, the subsequent full-scale land war with China in 1937, and the Tripartite Pact with Nazi Germany and Fascist Italy in 1940. As Deputy Navy Minister, he apologized to United States Ambassador Joseph C. Grew for the bombing of the gunboat USS Panay in December 1937. These issues made him a target of assassination threats by pro-war militarists. 
Throughout 1938, many young army and naval officers began to speak publicly against Yamamoto and certain other Japanese admirals, such as Mitsumasa Yonai and Shigeyoshi Inoue, for their strong opposition to a tripartite pact with Nazi Germany and Fascist Italy, which the admirals saw as inimical to "Japan's natural interests". Yamamoto received a steady stream of hate mail and death threats from Japanese nationalists. His reaction to the prospect of death by assassination was passive and accepting. The admiral wrote: To die for Emperor and Nation is the highest hope of a military man. After a brave hard fight the blossoms are scattered on the fighting field. But if a person wants to take a life instead, still the fighting man will go to eternity for Emperor and country. One man's life or death is a matter of no importance. All that matters is the Empire. As Confucius said, "They may crush cinnabar, yet they do not take away its color; one may burn a fragrant herb, yet it will not destroy the scent." They may destroy my body, yet they will not take away my will. The Japanese Army, annoyed at Yamamoto's unflinching opposition to a Rome-Berlin-Tokyo treaty, dispatched military police to "guard" him, a ruse by the Army to keep an eye on him. He was later reassigned from the naval ministry to sea as the commander-in-chief of the Combined Fleet on August 30, 1939. This was done as one of the last acts of acting Navy Minister Mitsumasa Yonai, under Baron Hiranuma Kiichirō's short-lived administration. It was done partly to make it harder for assassins to target Yamamoto. Yonai was certain that if Yamamoto remained ashore, he would be killed before the year [1939] ended. Yamamoto was promoted to admiral on November 15, 1940. Nevertheless, when Hideki Tojo was appointed Prime Minister on October 18, 1941, many political observers thought that Yamamoto's career was essentially over. 
Tojo had been Yamamoto's old opponent from the time when the latter served as Japan's deputy naval minister and Tojo was the prime mover behind Japan's takeover of Manchuria. It was believed that Yamamoto would be appointed to command the Yokosuka Naval Base, "a nice safe demotion with a big house and no power at all". However, after a brief stint in the post, a new Japanese cabinet was announced, and Yamamoto found himself returned to his position of power despite his open conflict with Tojo and other members of the Army's oligarchy who favored war with the European powers and the United States. Two of the main reasons for Yamamoto's political survival were his immense popularity within the fleet, where he commanded the respect of his men and officers, and his close relations with the imperial family. He also had the acceptance of Japan's naval hierarchy: There was no officer more competent to lead the Combined Fleet to victory than Admiral Yamamoto. His daring plan for the Pearl Harbor attack had passed through the crucible of the Japanese naval establishment, and after many expressed misgivings, his fellow admirals had realized that Yamamoto spoke no more than the truth when he said that Japan's hope for victory in this [upcoming] war was limited by time and oil. Every sensible officer of the navy was well aware of the perennial oil problems. Also, it had to be recognized that if the enemy could seriously disturb Japanese merchant shipping, then the fleet would be endangered even more. Consequently, Yamamoto stayed in his post. With Tojo now in charge of Japan's highest political office, it became clear the Army would lead the Navy into a war about which Yamamoto had serious reservations. He wrote to an ultranationalist: Should hostilities once break out between Japan and the United States, it would not be enough that we take Guam and the Philippines, nor even Hawaii and San Francisco. 
To make victory certain, we would have to march into Washington and dictate the terms of peace in the White House. I wonder if our politicians [who speak so lightly of a Japanese-American war] have confidence as to the final outcome and are prepared to make the necessary sacrifices. This quote was spread by the militarists, minus the last sentence, so it was interpreted in America as a boast that Japan would conquer the entire continental United States. The omitted sentence showed Yamamoto's counsel of caution towards a war that could cost Japan dearly. Nevertheless, Yamamoto accepted the reality of impending war and planned for a quick victory by destroying the United States Pacific Fleet at Pearl Harbor in a preventive strike, while simultaneously thrusting into the oil- and rubber-rich areas of Southeast Asia, especially the Dutch East Indies, Borneo, and Malaya. In naval matters, Yamamoto opposed the building of the super battleships Yamato and Musashi as an unwise investment of resources. Yamamoto was responsible for a number of innovations in Japanese naval aviation. Although remembered for his association with aircraft carriers, Yamamoto did more to influence the development of land-based naval aviation, particularly the Mitsubishi G3M and G4M medium bombers. His demand for great range and the ability to carry a torpedo was intended to conform to Japanese conceptions of bleeding the American fleet as it advanced across the Pacific. The planes did achieve long range, but long-range fighter escorts were not available. These planes were lightly constructed and when fully fueled, they were especially vulnerable to enemy fire. This earned the G4M the sardonic nickname the "flying cigarette lighter". Yamamoto would eventually die in one of these aircraft. The range of the G3M and G4M contributed to a demand for great range in a fighter aircraft. This partly drove the requirements for the A6M Zero, which was as noteworthy for its range as for its maneuverability. 
Both qualities were again purchased at the expense of light construction and flammability that later contributed to the A6M's high casualty rates as the war progressed. As Japan moved toward war during 1940, Yamamoto gradually moved toward strategic as well as tactical innovation, again with mixed results. Prompted by talented young officers such as Lieutenant Commander Minoru Genda, Yamamoto approved the reorganization of Japanese carrier forces into the First Air Fleet, a consolidated striking force that gathered Japan's six largest carriers into one unit. This innovation gave great striking capacity, but also concentrated the vulnerable carriers into a compact target. Yamamoto also oversaw the organization of a similar large land-based organization in the 11th Air Fleet, which would later use the G3M and G4M to neutralize American air forces in the Philippines and sink the British Force Z. In January 1941, Yamamoto went even further and proposed a radical revision of Japanese naval strategy. For two decades, in keeping with the doctrine of Captain Alfred T. Mahan, the Naval General Staff had planned in terms of Japanese light surface forces, submarines, and land-based air units whittling down the American fleet as it advanced across the Pacific until the Japanese Navy engaged it in a climactic Kantai Kessen ("decisive battle") in the northern Philippine Sea (between the Ryukyu Islands and the Marianas), with battleships fighting in traditional battle lines. Correctly pointing out this plan had never worked even in Japanese war games, and painfully aware of American strategic advantages in military production capacity, Yamamoto proposed instead to seek parity with the Americans by first reducing their forces with a preventive strike, then following up with a "decisive battle" fought offensively, rather than defensively. 
Yamamoto hoped, but probably did not believe, that if the Americans could be dealt terrific blows early in the war, they might be willing to negotiate an end to the conflict. The Naval General Staff proved reluctant to go along, and Yamamoto was eventually driven to capitalize on his popularity in the fleet by threatening to resign to get his way. Admiral Osami Nagano and the Naval General Staff eventually caved in to this pressure, but only insofar as approving the attack on Pearl Harbor. In January 1941, Yamamoto began developing a plan to attack the American base at Pearl Harbor, Hawaii, which the Japanese continued to refine over the following months. On November 5, 1941, Yamamoto issued his "Top Secret Operation Order no. 1" to the Combined Fleet, stating that the Empire of Japan must drive Britain and America out of Greater East Asia and hasten the settlement of China, and that once Britain and America were driven from the Philippines and the Dutch East Indies, an independent, self-supporting economic entity would be firmly established, in keeping with the principle of the Greater East Asia Co-Prosperity Sphere. Two days later, he set the date of the surprise attack on Pearl Harbor as December 7 for one simple reason: it was a Sunday, the day that American military personnel would be least alert to an attack. The First Air Fleet commenced preparations for the Pearl Harbor raid, solving a number of technical problems along the way, including how to launch torpedoes in the shallow waters of Pearl Harbor and how to craft armor-piercing bombs by machining down battleship gun projectiles. Although the United States and Japan were officially at peace, the First Air Fleet of six carriers attacked on December 7, 1941, launching 353 aircraft against Pearl Harbor and other locations within Honolulu in two waves. 
The attack was a success according to the parameters of the mission, which sought to sink at least four American battleships and prevent the United States from interfering in Japan's southward advance for at least six months. Three American aircraft carriers were also considered a choice target, but these were at sea at the time. In the end, four American battleships were sunk, four were damaged, and eleven other cruisers, destroyers, and auxiliaries were sunk or seriously damaged, 188 American aircraft were destroyed and 159 others damaged, and 2,403 people were killed and 1,178 others wounded. The Japanese lost 64 servicemen and only 29 aircraft, with 74 others damaged by anti-aircraft fire from the ground. The damaged aircraft were disproportionately dive and torpedo bombers, seriously reducing the ability to exploit the first two waves' success, so the commander of the First Air Fleet, Naval Vice Admiral Chuichi Nagumo, withdrew. Yamamoto later lamented Nagumo's failure to seize the initiative to seek out and destroy the American carriers or further bombard various strategically important facilities on Oahu. Nagumo had absolutely no idea where the American carriers were, and remaining on station while his forces looked for them ran the risk of his own forces being found first and attacked while his aircraft were absent searching. In any case, insufficient daylight remained after recovering the aircraft from the first two waves for the carriers to launch and recover a third before dark, and Nagumo's escorting destroyers lacked the fuel capacity to loiter long. Much has been made of Yamamoto's hindsight, but in keeping with Japanese military tradition to not criticize the commander on the spot, he did not punish Nagumo for his withdrawal. On the strategic, moral, and political level, the attack was a disaster for Japan, rousing Americans' thirst for revenge due to what is famously called a "sneak attack". 
The shock of the attack, coming in an unexpected place with devastating results and without a declaration of war, galvanized the American public's determination to avenge the attack. When asked by Prime Minister Fumimaro Konoe in mid-1941 about the outcome of a possible war with the United States, Yamamoto made a well-known and prophetic statement: If ordered to fight, he said, "I shall run wild considerably for the first six months or a year, but I have utterly no confidence for the second and third years." His prediction would be validated, as Japan easily conquered territories and islands in Asia and the Pacific for the first six months of the war, before suffering a major defeat at the Battle of Midway on June 4–7, 1942, which ultimately tilted the balance of power in the Pacific toward the United States. With the American fleet largely neutralized at Pearl Harbor, Yamamoto's Combined Fleet turned to the task of executing the larger Japanese war plan devised by the Imperial Japanese Army and Navy General Staff. The First Air Fleet made a circuit of the Pacific, striking American, Australian, Dutch, and British installations from Wake Island to Australia to Ceylon in the Indian Ocean. The 11th Air Fleet caught the United States Fifth Air Force on the ground in the Philippines hours after Pearl Harbor, and then sank the British Force Z's battleship HMS Prince of Wales and battlecruiser HMS Repulse at sea. Under Yamamoto's able subordinates, Vice Admirals Jisaburō Ozawa, Nobutake Kondō, and Ibō Takahashi, the Japanese swept the inadequate remaining American, British, Dutch and Australian naval assets from the Dutch East Indies in a series of amphibious landings and surface naval battles culminating in the Battle of the Java Sea on February 27, 1942. 
Along with the occupation of the Dutch East Indies came the fall of Singapore on February 15, and the eventual reduction of the remaining American-Filipino defensive positions in the Philippines on the Bataan peninsula on April 9 and Corregidor Island on May 6. The Japanese had secured their oil- and rubber-rich "southern resources area". By late March, having achieved their initial aims with surprising speed and little loss, albeit against enemies ill-prepared to resist them, the Japanese paused to consider their next moves. Yamamoto and a few Japanese military leaders and officials waited, hoping that the United States or Great Britain would negotiate an armistice or a peace treaty to end the war. But when the British, as well as the Americans, expressed no interest in negotiating, Japanese thoughts turned to securing their newly seized territory and acquiring more with an eye to driving one or more of their enemies out of the war. Competing plans were developed at this stage, including thrusts to the west against British India, south against Australia, and east against the United States. Yamamoto was involved in this debate, supporting different plans at different times with varying degrees of enthusiasm and for varying purposes, including "horse-trading" for support of his own objectives. Plans included ideas as ambitious as invading India or Australia, or seizing Hawaii. These grandiose ventures were inevitably set aside, as the Army could not spare enough troops from China for the first two, which would require a minimum of 250,000 men, nor shipping to support the latter two (transports were allocated separately to the Navy and Army, and jealously guarded). Instead, the Imperial General Staff supported an army thrust into Burma in hopes of linking up with Indian nationalists revolting against British rule, and attacks in New Guinea and the Solomon Islands designed to imperil Australia's lines of communication with the United States. 
Yamamoto argued for a decisive offensive strike in the east to finish off the American fleet, but the more conservative Naval General Staff officers were unwilling to risk it. On April 18, in the midst of these debates, the Doolittle Raid struck Tokyo and surrounding areas, demonstrating the threat posed by American aircraft carriers, and giving Yamamoto an event he could exploit to get his way, and further debate over military strategy came to a quick end. The Naval General Staff agreed to Yamamoto's Midway Island (MI) Operation, subsequent to the first phase of the operations against Australia's link with America, and concurrent with its plan to invade the Aleutian Islands. Yamamoto rushed planning for the Midway and Aleutians missions, while dispatching a force under Vice Admiral Takeo Takagi, including the Fifth Carrier Division (the large new carriers Shōkaku and Zuikaku), to support the effort to seize the islands of Tulagi and Guadalcanal for seaplane and airplane bases, and the town of Port Moresby on Papua New Guinea's south coast facing Australia. The Port Moresby (MO) Operation proved an unwelcome setback. Although Tulagi and Guadalcanal were taken, the Port Moresby invasion fleet was compelled to turn back when Takagi clashed with an American carrier task force in the Battle of the Coral Sea in early May. Although the Japanese sank the carrier USS Lexington and damaged the USS Yorktown, the Americans damaged the carrier Shōkaku so badly that she required dockyard repairs, and the Japanese lost the light carrier Shoho. Just as importantly, Japanese operational mishaps and American fighters and anti-aircraft fire devastated the dive bomber and torpedo plane formations of both Shōkaku's and Zuikaku's air groups. These losses sidelined Zuikaku while she awaited replacement aircraft and aircrews, and saw to tactical integration and training. These two ships would be sorely missed a month later at Midway. 
Yamamoto's plan for Midway Island was an extension of his efforts to knock the American Pacific Fleet out of action long enough for Japan to fortify its defensive perimeter in the Pacific island chains. Yamamoto felt it necessary to seek an early, offensive decisive battle. This plan was long believed to have been to draw American attention—and possibly carrier forces—north from Pearl Harbor by sending his Fifth Fleet (one carrier, one light carrier, four battleships, eight cruisers, 25 destroyers, and four transports) against the Aleutians, raiding Dutch Harbor on Unalaska Island and invading the more distant islands of Kiska and Attu. While Fifth Fleet attacked the Aleutians, First Mobile Force (four carriers, two battleships, three cruisers, and 12 destroyers) would attack Midway and destroy its air force. Once this was neutralized, Second Fleet (one light carrier, two battleships, 10 cruisers, 21 destroyers, and 11 transports) would land 5,000 troops to seize the atoll from the United States Marines. The seizure of Midway was expected to draw the American carriers west into a trap where the First Mobile Force would engage and destroy them. Afterwards, First Fleet (one light carrier, three battleships, one light cruiser and nine destroyers), in conjunction with elements of Second Fleet, would mop up remaining US surface forces and complete the destruction of the American Pacific Fleet. To guard against failure, Yamamoto initiated two security measures. The first was an aerial reconnaissance mission (Operation K) over Pearl Harbor to ascertain if the American carriers were there. The second was a picket line of submarines to detect the movement of enemy carriers toward Midway in time for First Mobile Force, First Fleet, and Second Fleet to combine against it. In the event, the first measure was aborted and the second delayed until after the American carriers had already sortied. 
The plan was a compromise and hastily prepared, apparently so it could be launched in time for the anniversary of the Battle of Tsushima, but appeared well thought out, well organized, and finely timed when viewed from the Japanese perspective. Against four fleet carriers, two light carriers, seven battleships, 14 cruisers and 42 destroyers likely to be in the area of the main battle, the United States could field only three carriers, eight cruisers, and 15 destroyers. The disparity appeared crushing. Only in numbers of carrier decks, available aircraft, and submarines was there near parity between the two sides. Despite various mishaps in its execution, it appeared that, barring something unforeseen, Yamamoto held all the cards. Unknown to Yamamoto, the Americans had learned of Japanese plans thanks to the code breaking of Japanese naval code D (known to the US as JN-25). As a result, Admiral Chester Nimitz, the Pacific Fleet commander, was able to place his outnumbered forces in a position to conduct their own ambush. By Nimitz's calculation, his three available carrier decks, plus Midway, gave him rough parity with Nagumo's First Mobile Force. Following a nuisance raid by Japanese flying boats in May, Nimitz dispatched a minesweeper to guard the intended refueling point for Operation K near French Frigate Shoals, causing the reconnaissance mission to be aborted and leaving Yamamoto ignorant of whether the Pacific Fleet carriers were still at Pearl Harbor. It remains unclear why Yamamoto permitted the earlier attack, and why his submarines did not sortie sooner, as reconnaissance was essential to success at Midway. Nimitz also dispatched his carriers toward Midway early, and they passed the Japanese submarines en route to their picket line positions. Nimitz's carriers positioned themselves to ambush the Kidō Butai (striking force) when it struck Midway. A token cruiser and destroyer force was sent toward the Aleutians, but otherwise Nimitz ignored them. 
On June 4, 1942, days before Yamamoto expected them to interfere in the Midway operation, American carrier-based aircraft destroyed the four carriers of the Kidō Butai, catching the Japanese carriers at especially vulnerable times. With his air power destroyed and his forces not yet concentrated for a fleet battle, Yamamoto maneuvered his remaining forces, still strong on paper, to trap the American forces. He was unable to do so because his initial dispositions had placed his surface combatants too far from Midway, and because Admiral Raymond Spruance prudently withdrew to the east to further defend Midway Island, believing (based on a mistaken submarine report) the Japanese still intended to invade. Not knowing several battleships, including the powerful Yamato, were in the Japanese order of battle, Spruance did not comprehend the severe risk of a night surface battle, in which his carriers and cruisers would be at a disadvantage. However, his move to the east avoided that possibility. Correctly perceiving he had lost and could not bring surface forces into action, Yamamoto withdrew. The defeat marked the high tide of Japanese expansion. Yamamoto's plan has been the subject of much criticism. Some historians state it violated the principle of concentration of force and was overly complex. Others point to similarly complex Allied operations, such as Operation MB8, that were successful, and note the extent to which the American intelligence coup derailed the operation before it began. Had Yamamoto's dispositions not denied Nagumo adequate pre-attack reconnaissance assets, both the American cryptanalytic success and the unexpected appearance of the American carriers would have been irrelevant. The Battle of Midway checked Japanese momentum, but the Japanese Navy was still a powerful force, capable of regaining the initiative. It planned to resume the thrust with Operation FS, aimed at eventually taking Fiji and Samoa to cut the American lifeline to Australia. 
Yamamoto remained as commander-in-chief, retained at least partly to avoid diminishing the morale of the Combined Fleet. However, he had lost face as a result of the Midway defeat, and the Naval General Staff were disinclined to indulge in further gambles. This reduced Yamamoto to pursuing the classic defensive "decisive battle strategy" he had attempted to avoid. Yamamoto committed Combined Fleet units to a series of small attrition actions across the south and central Pacific that stung the Americans, but in return suffered losses he could ill afford. Three major efforts to beat the Americans moving on Guadalcanal precipitated a pair of carrier battles that Yamamoto commanded personally: the Battles of the Eastern Solomons and Santa Cruz Islands in August and October 1942, respectively, and finally a pair of wild surface engagements in November, all timed to coincide with Japanese Army pushes. The effort was wasted when the Army could not hold up its end of the operation. Yamamoto's naval forces won a few victories and inflicted considerable losses and damage on the American fleet in several battles around Guadalcanal, which included the Battles of Savo Island, Cape Esperance, and Tassafaronga, but he could never draw the United States into a decisive fleet action. As a result, Japanese naval strength declined. To boost morale following the defeat at Guadalcanal, Yamamoto decided to make an inspection tour throughout the South Pacific. It was during this tour that U.S. officials commenced an operation to kill him. On April 14, 1943, the United States naval intelligence effort, codenamed "Magic", intercepted and decrypted a message containing specifics of Yamamoto's tour, including arrival and departure times and locations, as well as the number and types of aircraft that would transport and accompany him on the journey. 
Yamamoto, the itinerary revealed, would be flying from Rabaul to Balalae Airfield, on an island near Bougainville in the Solomon Islands, on the morning of April 18, 1943. President Franklin D. Roosevelt may have authorized Secretary of the Navy Frank Knox to "get Yamamoto", but no official record of such an order exists, and sources disagree on whether he did so. Knox essentially let Admiral Chester W. Nimitz make the decision. Nimitz first consulted Admiral William Halsey Jr., Commander, South Pacific, and then authorized the mission on April 17 to intercept and shoot down Yamamoto's flight en route. A squadron of United States Army Air Forces Lockheed P-38 Lightning aircraft was assigned the task, as only the P-38 possessed sufficient range. Select pilots from three units were informed that they were intercepting an "important high officer", with no specific name given. On the morning of April 18, despite urging by local commanders to cancel the trip for fear of ambush, Yamamoto's two Mitsubishi G4M bombers, used as fast transport aircraft without bombs, left Rabaul as scheduled for the 315 mi (507 km) trip. Sixteen P-38s intercepted the flight over Bougainville, and a dogfight ensued between them and the six escorting Mitsubishi A6M Zeros. First Lieutenant Rex T. Barber engaged the first of the two Japanese transports, which turned out to be T1-323 (Yamamoto's aircraft). He fired on the aircraft until it began to spew smoke from its left engine. Barber turned away to attack the other transport as Yamamoto's plane crashed into the jungle. Yamamoto's body, along with the crash site, was found the next day in the jungle of the island of Bougainville by a Japanese search-and-rescue party, led by army engineer Lieutenant Tsuyoshi Hamasuna. According to Hamasuna, Yamamoto had been thrown clear of the plane's wreckage, his white-gloved hand grasping the hilt of his katana, still upright in his seat under a tree. 
Hamasuna said Yamamoto was instantly recognizable, head dipped down as if deep in thought. A post-mortem disclosed that Yamamoto had received two .50-caliber bullet wounds, one to the back of his left shoulder and another to the left side of his lower jaw that exited above his right eye. The Japanese navy doctor examining the body determined that the head wound had killed Yamamoto. The more violent details of Yamamoto's death were hidden from the Japanese public. The medical report was changed "on orders from above", according to biographer Hiroyuki Agawa. Yamamoto's staff cremated his remains at Buin, Papua New Guinea, and his ashes were returned to Tokyo aboard the battleship Musashi, his last flagship. He was given a full state funeral on June 5, 1943, where he received, posthumously, the title of Marshal Admiral and was awarded the Order of the Chrysanthemum (1st Class). He was also awarded Nazi Germany's Knight's Cross of the Iron Cross with Oak Leaves and Swords. Some of his ashes were buried in the public Tama Cemetery, Tokyo (多摩霊園) and the remainder at his ancestral burial grounds at the temple of Chuko-ji in Nagaoka City. He was succeeded as commander-in-chief of the Combined Fleet by Admiral Mineichi Koga. In the years following Admiral Yamamoto's death, debate has arisen regarding whether he was assassinated rather than legally killed. Colonel Hays Parks, one of the U.S. government's foremost legal experts, wrote in his "Memorandum of Law: Executive Order 12333 and Assassination" that Admiral Yamamoto was killed because of his status as an enemy combatant in compliance with the applicable laws of war. Parks wrote that "enemy combatants are legitimate targets at all times, regardless of their duties or activities at the time of their attack. Such attacks do not constitute assassination unless carried out in a 'treacherous' manner, as prohibited by article 23(b) of the Annex to the Hague Regulations (Hague Convention IV) of 1907." 
Yamamoto practiced calligraphy. He and his wife, Reiko, had four children: two sons and two daughters. Yamamoto was an avid gambler, enjoying Go, shogi, billiards, bridge, mahjong, poker, and other games that tested his wits and sharpened his mind. He frequently joked about moving to Monaco and starting his own casino. He enjoyed the company of geisha, and his wife Reiko revealed to the Japanese public in 1954 that Yamamoto was closer to his favorite geisha, Kawai Chiyoko, than to her, which stirred some controversy. His funeral procession passed by Kawai's quarters on the way to the cemetery. Yamamoto was close friends with Teikichi Hori, a Navy admiral and Yamamoto's classmate from the Imperial Japanese Naval Academy who was purged from the Navy for supporting the Washington Naval Treaty. Before and during the war, Yamamoto frequently corresponded with Hori; these personal letters later became the subject of the NHK documentary The Truth of Yamamoto. The claim that Yamamoto was a Catholic is likely due to confusion with retired Admiral Shinjiro Stefano Yamamoto, who was a decade older than Isoroku and died in 1942. Since the end of the Second World War, a number of Japanese and American films have depicted the character of Isoroku Yamamoto. One of the most notable is the 1970 movie Tora! Tora! Tora!, which stars Japanese actor Sō Yamamura as Yamamoto, who states after the attack on Pearl Harbor: "I fear that all we have done is to awaken a sleeping giant and fill him with a terrible resolve." The first film to feature Yamamoto was Toho's 1953 film Eagle of the Pacific, in which Yamamoto was portrayed by Denjirō Ōkōchi. The 1960 film The Gallant Hours depicts the battle of wits between Vice-Admiral William Halsey, Jr. and Yamamoto from the start of the Guadalcanal Campaign in August 1942 to Yamamoto's death in April 1943. 
The film, however, portrays Yamamoto's death as occurring in November 1942, the day after the Naval Battle of Guadalcanal, and the P-38 aircraft that killed him as coming from Guadalcanal. In Daiei Studios's 1969 film Aa, kaigun (later released in the United States as Gateway to Glory), Yamamoto was portrayed by Shōgo Shimada. Professional wrestler Harold Watanabe adopted the villainous Japanese gimmick of Tojo Yamamoto in reference to both Yamamoto and Hideki Tojo. Award-winning Japanese actor Toshiro Mifune (star of The Seven Samurai) portrayed Yamamoto in three films. A fictionalized version of Yamamoto's death was portrayed in the Baa Baa Black Sheep episode "The Hawk Flies on Sunday", though only photos of Yamamoto were shown. In this episode, set much later in the war than the actual events, the Black Sheep, a Marine Corsair squadron, joins an army squadron of P-51 Mustangs; the Marines engage the fighter cover while the Army squadron shoots down Yamamoto. In Shūe Matsubayashi's 1981 film Rengō kantai (lit. "Combined Fleet", later released in the United States as The Imperial Navy), Yamamoto was portrayed by Keiju Kobayashi. In the 1993 OVA series Konpeki no Kantai (lit. Deep Blue Fleet), instead of dying in the plane crash, Yamamoto blacks out and suddenly wakes up as his younger self, Isoroku Takano, after the Battle of Tsushima in 1905. His memory from the original timeline intact, Yamamoto uses his knowledge of the future to help Japan become a stronger military power, eventually launching a coup d'état against Hideki Tōjō's government. In the subsequent Pacific War, Japan's technologically advanced navy decisively defeats the United States and grants all of the former European and American colonies in Asia full independence. Later on, Yamamoto convinces Japan to join forces with the United States and Britain to defeat Nazi Germany. 
The series was criticized outside Japan as a whitewash of Imperial Japan's intentions towards its neighbors and as distancing Japan from its wartime alliance with Nazi Germany. In Neal Stephenson's 1999 book Cryptonomicon, Yamamoto's final moments are depicted, with him realizing that Japan's naval codes have been broken and that he must inform headquarters. In the 2001 film Pearl Harbor, Yamamoto was portrayed by Oscar-nominated Japanese-born American actor Mako Iwamatsu. Like Tora! Tora! Tora!, this film also features a version of the sleeping giant quote. In the 2004 anime series Zipang, Yamamoto (voiced by Bunmei Tobayama) works to develop an uneasy partnership with the crew of the JMSDF Mirai, which has been transported back sixty years through time to the year 1942. In the Axis of Time trilogy by author John Birmingham, after a naval task force from the year 2021 is accidentally transported back through time to 1942, Yamamoto assumes a leadership role in the dramatic alteration of Japan's war strategy. In The West Wing episode "We Killed Yamamoto", the Chairman of the Joint Chiefs of Staff uses the killing of Yamamoto to advocate for an assassination. In Douglas Niles' 2007 book MacArthur's War: A Novel of the Invasion of Japan (written with Michael Dobson), which focuses on General Douglas MacArthur and an alternate history of the Pacific War (following a considerably different outcome of the Battle of Midway), Yamamoto is portrayed sympathetically, with much of the action in the Japanese government seen through his eyes, though he could not change the major decisions of Japan in World War II. In Toei's 2011 war film Rengō Kantai Shirei Chōkan: Yamamoto Isoroku (released on Blu-ray as The Admiral in English and Der Admiral in German), Yamamoto was portrayed by Kōji Yakusho. The film portrays his career from Pearl Harbor to his death in Operation Vengeance. 
In Robert Conroy's 2011 book Rising Sun, Yamamoto directs the IJN to launch a series of attacks on the American West Coast in the hope that the United States can be convinced to sue for peace, securing Japan's place as a world power; but he cannot escape his lingering fear that the war will ultimately doom Japan. In the 2019 motion picture Midway, Yamamoto is portrayed by Etsushi Toyokawa.
[ { "paragraph_id": 0, "text": "Isoroku Yamamoto (山本 五十六, Yamamoto Isoroku, April 4, 1884 – April 18, 1943) was a Marshal Admiral of the Imperial Japanese Navy (IJN) and the commander-in-chief of the Combined Fleet during World War II.", "title": "" }, { "paragraph_id": 1, "text": "Yamamoto held several important posts in the Imperial Navy, and undertook many of its changes and reorganizations, especially its development of naval aviation. He was the commander-in-chief during the early years of the Pacific War and oversaw major engagements including the attack on Pearl Harbor and the Battle of Midway.", "title": "" }, { "paragraph_id": 2, "text": "Yamamoto was killed in April 1943 after American code breakers identified his flight plans, enabling the United States Army Air Forces to shoot down his plane. His death was a major blow to Japanese military morale during World War II.", "title": "" }, { "paragraph_id": 3, "text": "Yamamoto was born Isoroku Takano (高野 五十六, Takano Isoroku) in Nagaoka, Niigata. His father, Sadayoshi Takano (高野 貞吉), was an intermediate-rank samurai of the Nagaoka Domain. \"Isoroku\" is a Japanese term meaning \"56\"; the name referred to his father's age at Isoroku's birth.", "title": "Family background" }, { "paragraph_id": 4, "text": "In 1916, Isoroku was adopted into the Yamamoto family (another family of former Nagaoka samurai) and took the Yamamoto name. It was a common practice for samurai families lacking sons to adopt suitable young men in this fashion to carry on the family name, the rank and the income that went with it. Isoroku married Reiko Mihashi in 1918; they had two sons and two daughters.", "title": "Family background" }, { "paragraph_id": 5, "text": "Yamamoto graduated from the Imperial Japanese Naval Academy in 1904, ranking 11th in his class. He then subsequently served on the armored cruiser Nisshin during the Russo-Japanese War. 
He was wounded at the Battle of Tsushima, losing his index and middle fingers on his left hand, as the cruiser was hit repeatedly by the Russian battle line. He returned to the Naval Staff College in 1914, emerging as a lieutenant commander in 1916. In December 1919, he was promoted to commander.", "title": "Early career" }, { "paragraph_id": 6, "text": "Yamamoto was part of the Japanese Navy establishment, who were rivals of the more aggressive Army establishment, especially the officers of the Kwantung Army. He promoted a policy of a strong fleet to project force through gunboat diplomacy, rather than a fleet used primarily for the transport of invasion land forces, as some of his political opponents in the Army wanted. This stance led him to oppose the invasion of China. He also opposed war against the United States, partly because of his studies at Harvard University (1919–1921) and his two postings as a naval attaché in Washington, D.C., where he learned to speak fluent English. Yamamoto traveled extensively in the United States during his tour of duty there, where he studied American customs and business practices.", "title": "1920s and 1930s" }, { "paragraph_id": 7, "text": "He was promoted to captain in 1923. On February 13, 1924, Captain Yamamoto was part of the Japanese delegation visiting the United States Naval War College. Later that year, he changed his specialty from gunnery to naval aviation. His first command was the cruiser Isuzu in 1928, followed by the aircraft carrier Akagi.", "title": "1920s and 1930s" }, { "paragraph_id": 8, "text": "He participated in the London Naval Conference 1930 as a rear admiral and the London Naval Conference 1935 as a vice admiral, as the growing military influence on the government at the time deemed that a career military specialist needed to accompany the diplomats to the arms limitations talks. 
Yamamoto was a strong proponent of naval aviation and served as head of the Aeronautics Department, before accepting a post as commander of the First Carrier Division. Yamamoto opposed the Japanese invasion of northeast China in 1931, the subsequent full-scale land war with China in 1937, and the Tripartite Pact with Nazi Germany and Fascist Italy in 1940. As Deputy Navy Minister, he apologized to United States Ambassador Joseph C. Grew for the bombing of the gunboat USS Panay in December 1937. These issues made him a target of assassination threats by pro-war militarists.", "title": "1920s and 1930s" }, { "paragraph_id": 9, "text": "Throughout 1938, many young army and naval officers began to speak publicly against Yamamoto and certain other Japanese admirals, such as Mitsumasa Yonai and Shigeyoshi Inoue, for their strong opposition to a tripartite pact with Nazi Germany and Fascist Italy, which the admirals saw as inimical to \"Japan's natural interests\". Yamamoto received a steady stream of hate mail and death threats from Japanese nationalists. His reaction to the prospect of death by assassination was passive and accepting. The admiral wrote:", "title": "1920s and 1930s" }, { "paragraph_id": 10, "text": "To die for Emperor and Nation is the highest hope of a military man. After a brave hard fight the blossoms are scattered on the fighting field. But if a person wants to take a life instead, still the fighting man will go to eternity for Emperor and country. One man's life or death is a matter of no importance. All that matters is the Empire. 
As Confucius said, \"They may crush cinnabar, yet they do not take away its color; one may burn a fragrant herb, yet it will not destroy the scent.\" They may destroy my body, yet they will not take away my will.", "title": "1920s and 1930s" }, { "paragraph_id": 11, "text": "The Japanese Army, annoyed at Yamamoto's unflinching opposition to a Rome-Berlin-Tokyo treaty, dispatched military police to \"guard\" him, a ruse by the Army to keep an eye on him. He was later reassigned from the naval ministry to sea as the commander-in-chief of the Combined Fleet on August 30, 1939. This was done as one of the last acts of acting Navy Minister Mitsumasa Yonai, under Baron Hiranuma Kiichirō's short-lived administration. It was done partly to make it harder for assassins to target Yamamoto. Yonai was certain that if Yamamoto remained ashore, he would be killed before the year [1939] ended.", "title": "1920s and 1930s" }, { "paragraph_id": 12, "text": "Yamamoto was promoted to admiral on November 15, 1940. This was in spite of the fact that when Hideki Tojo was appointed Prime Minister on October 18, 1941, many political observers thought that Yamamoto's career was essentially over. Tojo had been Yamamoto's old opponent from the time when the latter served as Japan's deputy naval minister and Tojo was the prime mover behind Japan's takeover of Manchuria. It was believed that Yamamoto would be appointed to command the Yokosuka Naval Base, \"a nice safe demotion with a big house and no power at all\". 
However, after a brief stint in the post, a new Japanese cabinet was announced, and Yamamoto found himself returned to his position of power despite his open conflict with Tojo and other members of the Army's oligarchy who favored war with the European powers and the United States.", "title": "1940–1941" }, { "paragraph_id": 13, "text": "Two of the main reasons for Yamamoto's political survival were his immense popularity within the fleet, where he commanded the respect of his men and officers, and his close relations with the imperial family. He also had the acceptance of Japan's naval hierarchy:", "title": "1940–1941" }, { "paragraph_id": 14, "text": "There was no officer more competent to lead the Combined Fleet to victory than Admiral Yamamoto. His daring plan for the Pearl Harbor attack had passed through the crucible of the Japanese naval establishment, and after many expressed misgivings, his fellow admirals had realized that Yamamoto spoke no more than the truth when he said that Japan's hope for victory in this [upcoming] war was limited by time and oil. Every sensible officer of the navy was well aware of the perennial oil problems. Also, it had to be recognized that if the enemy could seriously disturb Japanese merchant shipping, then the fleet would be endangered even more.", "title": "1940–1941" }, { "paragraph_id": 15, "text": "Consequently, Yamamoto stayed in his post. With Tojo now in charge of Japan's highest political office, it became clear the Army would lead the Navy into a war about which Yamamoto had serious reservations. He wrote to an ultranationalist:", "title": "1940–1941" }, { "paragraph_id": 16, "text": "Should hostilities once break out between Japan and the United States, it would not be enough that we take Guam and the Philippines, nor even Hawaii and San Francisco. To make victory certain, we would have to march into Washington and dictate the terms of peace in the White House. 
I wonder if our politicians [who speak so lightly of a Japanese-American war] have confidence as to the final outcome and are prepared to make the necessary sacrifices.", "title": "1940–1941" }, { "paragraph_id": 17, "text": "This quote was spread by the militarists, minus the last sentence, so it was interpreted in America as a boast that Japan would conquer the entire continental United States. The omitted sentence showed Yamamoto's counsel of caution towards a war that could cost Japan dearly. Nevertheless, Yamamoto accepted the reality of impending war and planned for a quick victory by destroying the United States Pacific Fleet at Pearl Harbor in a preventive strike, while simultaneously thrusting into the oil- and rubber-rich areas of Southeast Asia, especially the Dutch East Indies, Borneo, and Malaya. In naval matters, Yamamoto opposed the building of the super battleships Yamato and Musashi as an unwise investment of resources.", "title": "1940–1941" }, { "paragraph_id": 18, "text": "Yamamoto was responsible for a number of innovations in Japanese naval aviation. Although remembered for his association with aircraft carriers, Yamamoto did more to influence the development of land-based naval aviation, particularly the Mitsubishi G3M and G4M medium bombers. His demand for great range and the ability to carry a torpedo was intended to conform to Japanese conceptions of bleeding the American fleet as it advanced across the Pacific. The planes did achieve long range, but long-range fighter escorts were not available. These planes were lightly constructed and when fully fueled, they were especially vulnerable to enemy fire. This earned the G4M the sardonic nickname the \"flying cigarette lighter\". Yamamoto would eventually die in one of these aircraft.", "title": "1940–1941" }, { "paragraph_id": 19, "text": "The range of the G3M and G4M contributed to a demand for great range in a fighter aircraft. 
This partly drove the requirements for the A6M Zero, which was as noteworthy for its range as for its maneuverability. Both qualities were again purchased at the expense of light construction and flammability that later contributed to the A6M's high casualty rates as the war progressed.", "title": "1940–1941" }, { "paragraph_id": 20, "text": "As Japan moved toward war during 1940, Yamamoto gradually moved toward strategic as well as tactical innovation, again with mixed results. Prompted by talented young officers such as Lieutenant Commander Minoru Genda, Yamamoto approved the reorganization of Japanese carrier forces into the First Air Fleet, a consolidated striking force that gathered Japan's six largest carriers into one unit. This innovation gave great striking capacity, but also concentrated the vulnerable carriers into a compact target. Yamamoto also oversaw the organization of a similar large land-based organization in the 11th Air Fleet, which would later use the G3M and G4M to neutralize American air forces in the Philippines and sink the British Force Z.", "title": "1940–1941" }, { "paragraph_id": 21, "text": "In January 1941, Yamamoto went even further and proposed a radical revision of Japanese naval strategy. For two decades, in keeping with the doctrine of Captain Alfred T. 
Mahan, the Naval General Staff had planned in terms of Japanese light surface forces, submarines, and land-based air units whittling down the American fleet as it advanced across the Pacific until the Japanese Navy engaged it in a climactic Kantai Kessen (\"decisive battle\") in the northern Philippine Sea (between the Ryukyu Islands and the Marianas), with battleships fighting in traditional battle lines.", "title": "1940–1941" }, { "paragraph_id": 22, "text": "Correctly pointing out this plan had never worked even in Japanese war games, and painfully aware of American strategic advantages in military production capacity, Yamamoto proposed instead to seek parity with the Americans by first reducing their forces with a preventive strike, then following up with a \"decisive battle\" fought offensively, rather than defensively. Yamamoto hoped, but probably did not believe, that if the Americans could be dealt terrific blows early in the war, they might be willing to negotiate an end to the conflict. The Naval General Staff proved reluctant to go along, and Yamamoto was eventually driven to capitalize on his popularity in the fleet by threatening to resign to get his way. Admiral Osami Nagano and the Naval General Staff eventually caved in to this pressure, but only insofar as approving the attack on Pearl Harbor.", "title": "1940–1941" }, { "paragraph_id": 23, "text": "In January 1941 Yamamoto began developing a plan to attack the American base in Pearl Harbor, Hawaii. which the Japanese continued to refine during the next months. On November 5, 1941, Yamamoto in his \"Top Secret Operation Order no. 
1\" issued to the Combined Fleet, the Empire of Japan must drive out Britain and America from Greater East Asia and hasten the settlement of China, whereas, in the event that Britain and America were driven out from the Philippines and Dutch East Indies, an independent, self-supporting economic entity will be firmly established—mirroring the principle of the Greater East Asia Co-Prosperity Sphere in another personification.", "title": "1940–1941" }, { "paragraph_id": 24, "text": "Two days later, he set the date for the intended surprise attack in Pearl Harbor and that would be on December 7 for one simple reason: it was a Sunday, the day that American military personnel would be least alert to an attack.", "title": "1940–1941" }, { "paragraph_id": 25, "text": "The First Air Fleet commenced preparations for the Pearl Harbor raid, solving a number of technical problems along the way, including how to launch torpedoes in the shallow waters of Pearl Harbor and how to craft armor-piercing bombs by machining down battleship gun projectiles.", "title": "1940–1941" }, { "paragraph_id": 26, "text": "Although the United States and Japan were officially at peace, the First Air Fleet of six carriers attacked on December 7, 1941, launching 353 aircraft against Pearl Harbor and other locations within Honolulu in two waves. The attack was a success according to the parameters of the mission, which sought to sink at least four American battleships and prevent the United States from interfering in Japan's southward advance for at least six months. Three American aircraft carriers were also considered a choice target, but these were at sea at the time.", "title": "1940–1941" }, { "paragraph_id": 27, "text": "In the end, four American battleships were sunk, four were damaged, and eleven other cruisers, destroyers, and auxiliaries were sunk or seriously damaged, 188 American aircraft were destroyed and 159 others damaged, and 2,403 people were killed and 1,178 others wounded. 
The Japanese lost 64 servicemen and only 29 aircraft, with 74 others damaged by anti-aircraft fire from the ground. The damaged aircraft were disproportionately dive and torpedo bombers, seriously reducing the ability to exploit the first two waves' success, so the commander of the First Air Fleet, Naval Vice Admiral Chuichi Nagumo, withdrew. Yamamoto later lamented Nagumo's failure to seize the initiative to seek out and destroy the American carriers or further bombard various strategically important facilities on Oahu.", "title": "1940–1941" }, { "paragraph_id": 28, "text": "Nagumo had absolutely no idea where the American carriers were, and remaining on station while his forces looked for them ran the risk of his own forces being found first and attacked while his aircraft were absent searching. In any case, insufficient daylight remained after recovering the aircraft from the first two waves for the carriers to launch and recover a third before dark, and Nagumo's escorting destroyers lacked the fuel capacity to loiter long. Much has been made of Yamamoto's hindsight, but in keeping with Japanese military tradition to not criticize the commander on the spot, he did not punish Nagumo for his withdrawal.", "title": "1940–1941" }, { "paragraph_id": 29, "text": "On the strategic, moral, and political level, the attack was a disaster for Japan, rousing Americans' thirst for revenge due to what is famously called a \"sneak attack\". The shock of the attack, coming in an unexpected place with devastating results and without a declaration of war, galvanized the American public's determination to avenge the attack. 
When asked by Prime Minister Fumimaro Konoe in mid-1941 about the outcome of a possible war with the United States, Yamamoto made a well-known and prophetic statement: If ordered to fight, he said, \"I shall run wild considerably for the first six months or a year, but I have utterly no confidence for the second and third years.\" His prediction would be validated, as Japan easily conquered territories and islands in Asia and the Pacific for the first six months of the war, before suffering a major defeat at the Battle of Midway on June 4–7, 1942, which ultimately tilted the balance of power in the Pacific toward the United States.", "title": "1940–1941" }, { "paragraph_id": 30, "text": "With the American fleet largely neutralized at Pearl Harbor, Yamamoto's Combined Fleet turned to the task of executing the larger Japanese war plan devised by the Imperial Japanese Army and Navy General Staff. The First Air Fleet made a circuit of the Pacific, striking American, Australian, Dutch, and British installations from Wake Island to Australia to Ceylon in the Indian Ocean. The 11th Air Fleet caught the United States Fifth Air Force on the ground in the Philippines hours after Pearl Harbor, and then sank the British Force Z's battleship HMS Prince of Wales and battlecruiser HMS Repulse at sea.", "title": "December 1941 – May 1942" }, { "paragraph_id": 31, "text": "Under Yamamoto's able subordinates, Vice Admirals Jisaburō Ozawa, Nobutake Kondō, and Ibō Takahashi, the Japanese swept the inadequate remaining American, British, Dutch and Australian naval assets from the Dutch East Indies in a series of amphibious landings and surface naval battles culminating in the Battle of the Java Sea on February 27, 1942. Along with the occupation of the Dutch East Indies came the fall of Singapore on February 15, and the eventual reduction of the remaining American-Filipino defensive positions in the Philippines on the Bataan peninsula on April 9 and Corregidor Island on May 6. 
The Japanese had secured their oil- and rubber-rich \"southern resources area\".", "title": "December 1941 – May 1942" }, { "paragraph_id": 32, "text": "By late March, having achieved their initial aims with surprising speed and little loss, albeit against enemies ill-prepared to resist them, the Japanese paused to consider their next moves. Yamamoto and a few Japanese military leaders and officials waited, hoping that the United States or Great Britain would negotiate an armistice or a peace treaty to end the war. But when the British, as well as the Americans, expressed no interest in negotiating, Japanese thoughts turned to securing their newly seized territory and acquiring more with an eye to driving one or more of their enemies out of the war.", "title": "December 1941 – May 1942" }, { "paragraph_id": 33, "text": "Competing plans were developed at this stage, including thrusts to the west against British India, south against Australia, and east against the United States. Yamamoto was involved in this debate, supporting different plans at different times with varying degrees of enthusiasm and for varying purposes, including \"horse-trading\" for support of his own objectives.", "title": "December 1941 – May 1942" }, { "paragraph_id": 34, "text": "Plans included ideas as ambitious as invading India or Australia, or seizing Hawaii. These grandiose ventures were inevitably set aside, as the Army could not spare enough troops from China for the first two, which would require a minimum of 250,000 men, nor shipping to support the latter two (transports were allocated separately to the Navy and Army, and jealously guarded). Instead, the Imperial General Staff supported an army thrust into Burma in hopes of linking up with Indian nationalists revolting against British rule, and attacks in New Guinea and the Solomon Islands designed to imperil Australia's lines of communication with the United States. 
Yamamoto argued for a decisive offensive strike in the east to finish off the American fleet, but the more conservative Naval General Staff officers were unwilling to risk it.", "title": "December 1941 – May 1942" }, { "paragraph_id": 35, "text": "On April 18, in the midst of these debates, the Doolittle Raid struck Tokyo and surrounding areas, demonstrating the threat posed by American aircraft carriers, and giving Yamamoto an event he could exploit to get his way, and further debate over military strategy came to a quick end. The Naval General Staff agreed to Yamamoto's Midway Island (MI) Operation, subsequent to the first phase of the operations against Australia's link with America, and concurrent with its plan to invade the Aleutian Islands.", "title": "December 1941 – May 1942" }, { "paragraph_id": 36, "text": "Yamamoto rushed planning for the Midway and Aleutians missions, while dispatching a force under Vice Admiral Takeo Takagi, including the Fifth Carrier Division (the large new carriers Shōkaku and Zuikaku), to support the effort to seize the islands of Tulagi and Guadalcanal for seaplane and airplane bases, and the town of Port Moresby on Papua New Guinea's south coast facing Australia.", "title": "December 1941 – May 1942" }, { "paragraph_id": 37, "text": "The Port Moresby (MO) Operation proved an unwelcome setback. Although Tulagi and Guadalcanal were taken, the Port Moresby invasion fleet was compelled to turn back when Takagi clashed with an American carrier task force in the Battle of the Coral Sea in early May. Although the Japanese sank the carrier USS Lexington and damaged the USS Yorktown, the Americans damaged the carrier Shōkaku so badly that she required dockyard repairs, and the Japanese lost the light carrier Shoho. Just as importantly, Japanese operational mishaps and American fighters and anti-aircraft fire devastated the dive bomber and torpedo plane formations of both Shōkaku's and Zuikaku's air groups. 
These losses sidelined Zuikaku while she awaited replacement aircraft and aircrews, and saw to tactical integration and training. These two ships would be sorely missed a month later at Midway.", "title": "December 1941 – May 1942" }, { "paragraph_id": 38, "text": "Yamamoto's plan for Midway Island was an extension of his efforts to knock the American Pacific Fleet out of action long enough for Japan to fortify its defensive perimeter in the Pacific island chains. Yamamoto felt it necessary to seek an early, offensive decisive battle.", "title": "Battle of Midway, June 1942" }, { "paragraph_id": 39, "text": "This plan was long believed to have been to draw American attention—and possibly carrier forces—north from Pearl Harbor by sending his Fifth Fleet (one carrier, one light carrier, four battleships, eight cruisers, 25 destroyers, and four transports) against the Aleutians, raiding Dutch Harbor on Unalaska Island and invading the more distant islands of Kiska and Attu.", "title": "Battle of Midway, June 1942" }, { "paragraph_id": 40, "text": "While Fifth Fleet attacked the Aleutians, First Mobile Force (four carriers, two battleships, three cruisers, and 12 destroyers) would attack Midway and destroy its air force. Once this was neutralized, Second Fleet (one light carrier, two battleships, 10 cruisers, 21 destroyers, and 11 transports) would land 5,000 troops to seize the atoll from the United States Marines.", "title": "Battle of Midway, June 1942" }, { "paragraph_id": 41, "text": "The seizure of Midway was expected to draw the American carriers west into a trap where the First Mobile Force would engage and destroy them. 
Afterwards, First Fleet (one light carrier, three battleships, one light cruiser and nine destroyers), in conjunction with elements of Second Fleet, would mop up remaining US surface forces and complete the destruction of the American Pacific Fleet.", "title": "Battle of Midway, June 1942" }, { "paragraph_id": 42, "text": "To guard against failure, Yamamoto initiated two security measures. The first was an aerial reconnaissance mission (Operation K) over Pearl Harbor to ascertain if the American carriers were there. The second was a picket line of submarines to detect the movement of enemy carriers toward Midway in time for First Mobile Force, First Fleet, and Second Fleet to combine against it. In the event, the first measure was aborted and the second delayed until after the American carriers had already sortied.", "title": "Battle of Midway, June 1942" }, { "paragraph_id": 43, "text": "The plan was a compromise and hastily prepared, apparently so it could be launched in time for the anniversary of the Battle of Tsushima, but appeared well thought out, well organized, and finely timed when viewed from a Japanese viewpoint. Against four fleet carriers, two light carriers, seven battleships, 14 cruisers and 42 destroyers likely to be in the area of the main battle, the United States could field only three carriers, eight cruisers, and 15 destroyers. The disparity appeared crushing. Only in numbers of carrier decks, available aircraft, and submarines was there near parity between the two sides. Despite various mishaps that developed in the execution, it appeared that—barring something unforeseen—Yamamoto held all the cards.", "title": "Battle of Midway, June 1942" }, { "paragraph_id": 44, "text": "Unknown to Yamamoto, the Americans had learned of Japanese plans thanks to the code breaking of Japanese naval code D (known to the US as JN-25). 
As a result, Admiral Chester Nimitz, the Pacific Fleet commander, was able to place his outnumbered forces in a position to conduct their own ambush. By Nimitz's calculation, his three available carrier decks, plus Midway, gave him rough parity with Nagumo's First Mobile Force.", "title": "Battle of Midway, June 1942" }, { "paragraph_id": 45, "text": "Following a nuisance raid by Japanese flying boats in May, Nimitz dispatched a minesweeper to guard the intended refueling point for Operation K near French Frigate Shoals, causing the reconnaissance mission to be aborted and leaving Yamamoto ignorant of whether the Pacific Fleet carriers were still at Pearl Harbor. It remains unclear why Yamamoto permitted the earlier attack, and why his submarines did not sortie sooner, as reconnaissance was essential to success at Midway. Nimitz also dispatched his carriers toward Midway early, and they passed the Japanese submarines en route to their picket line positions. Nimitz's carriers positioned themselves to ambush the Kidō Butai (striking force) when it struck Midway. A token cruiser and destroyer force was sent toward the Aleutians, but otherwise Nimitz ignored them. On June 4, 1942, days before Yamamoto expected them to interfere in the Midway operation, American carrier-based aircraft destroyed the four carriers of the Kidō Butai, catching the Japanese carriers at especially vulnerable times.", "title": "Battle of Midway, June 1942" }, { "paragraph_id": 46, "text": "With his air power destroyed and his forces not yet concentrated for a fleet battle, Yamamoto maneuvered his remaining forces, still strong on paper, to trap the American forces. He was unable to do so because his initial dispositions had placed his surface combatants too far from Midway, and because Admiral Raymond Spruance prudently withdrew to the east to further defend Midway Island, believing (based on a mistaken submarine report) the Japanese still intended to invade. 
Not knowing several battleships, including the powerful Yamato, were in the Japanese order of battle, he did not comprehend the severe risk of a night surface battle, in which his carriers and cruisers would be at a disadvantage. However, his move to the east avoided that possibility. Correctly perceiving he had lost and could not bring surface forces into action, Yamamoto withdrew. The defeat marked the high tide of Japanese expansion.", "title": "Battle of Midway, June 1942" }, { "paragraph_id": 47, "text": "Yamamoto's plan has been the subject of much criticism. Some historians state it violated the principle of concentration of force and was overly complex. Others point to similarly complex Allied operations, such as Operation MB8, that were successful, and note the extent to which the American intelligence coup derailed the operation before it began. Had Yamamoto's dispositions not denied Nagumo adequate pre-attack reconnaissance assets, both the American cryptanalytic success and the unexpected appearance of the American carriers would have been irrelevant.", "title": "Battle of Midway, June 1942" }, { "paragraph_id": 48, "text": "The Battle of Midway checked Japanese momentum, but the Japanese Navy was still a powerful force, capable of regaining the initiative. It planned to resume the thrust with Operation FS, aimed at eventually taking Fiji and Samoa to cut the American lifeline to Australia.", "title": "Actions after Midway" }, { "paragraph_id": 49, "text": "Yamamoto remained as commander-in-chief, retained at least partly to avoid diminishing the morale of the Combined Fleet. However, he had lost face as a result of the Midway defeat, and the Naval General Staff were disinclined to indulge in further gambles. 
This reduced Yamamoto to pursuing the classic defensive \"decisive battle strategy\" he had attempted to avoid.", "title": "Actions after Midway" }, { "paragraph_id": 50, "text": "Yamamoto committed Combined Fleet units to a series of small attrition actions across the south and central Pacific that stung the Americans, but in return suffered losses he could ill afford. Three major efforts to beat the Americans moving on Guadalcanal precipitated a pair of carrier battles that Yamamoto commanded personally: the Battles of the Eastern Solomons and Santa Cruz Islands in August and October, respectively, and finally a pair of wild surface engagements in November, all timed to coincide with Japanese Army pushes. The effort was wasted when the Army could not hold up its end of the operation. Yamamoto's naval forces won a few victories and inflicted considerable losses and damage on the American fleet in several battles around Guadalcanal, which included the Battles of Savo Island, Cape Esperance, and Tassafaronga, but he could never draw the United States into a decisive fleet action. As a result, Japanese naval strength declined.", "title": "Actions after Midway" }, { "paragraph_id": 51, "text": "To boost morale following the defeat at Guadalcanal, Yamamoto decided to make an inspection tour throughout the South Pacific. It was during this tour that U.S. officials commenced an operation to kill him. On April 14, 1943, the United States naval intelligence effort, codenamed \"Magic\", intercepted and decrypted a message containing specifics of Yamamoto's tour, including arrival and departure times and locations, as well as the number and types of aircraft that would transport and accompany him on the journey. Yamamoto, the itinerary revealed, would be flying from Rabaul to Balalae Airfield, on an island near Bougainville in the Solomon Islands, on the morning of April 18, 1943.", "title": "Death" }, { "paragraph_id": 52, "text": "President Franklin D. 
Roosevelt may have authorized Secretary of the Navy Frank Knox to \"get Yamamoto\", but no official record of such an order exists, and sources disagree whether he did so. Knox essentially let Admiral Chester W. Nimitz make the decision. Nimitz first consulted Admiral William Halsey Jr., Commander, South Pacific, and then authorized the mission on April 17 to intercept and shoot down Yamamoto's flight en route. A squadron of United States Army Air Forces Lockheed P-38 Lightning aircraft were assigned the task as only they possessed sufficient range. Select pilots from three units were informed that they were intercepting an \"important high officer\", with no specific name given.", "title": "Death" }, { "paragraph_id": 53, "text": "On the morning of April 18, despite urging by local commanders to cancel the trip for fear of ambush, Yamamoto's two Mitsubishi G4M bombers, used as fast transport aircraft without bombs, left Rabaul as scheduled for the 315 mi (507 km) trip. Sixteen P-38s intercepted the flight over Bougainville, and a dogfight ensued between them and the six escorting Mitsubishi A6M Zeroes. First Lieutenant Rex T. Barber engaged the first of the two Japanese transports, which turned out to be T1-323 (Yamamoto's aircraft). He fired on the aircraft until it began to spew smoke from its left engine. Barber turned away to attack the other transport as Yamamoto's plane crashed into the jungle.", "title": "Death" }, { "paragraph_id": 54, "text": "Yamamoto's body, along with the crash site, was found the next day in the jungle of the island of Bougainville by a Japanese search-and-rescue party, led by army engineer Lieutenant Tsuyoshi Hamasuna. According to Hamasuna, Yamamoto had been thrown clear of the plane's wreckage, his white-gloved hand grasping the hilt of his katana, still upright in his seat under a tree. Hamasuna said Yamamoto was instantly recognizable, head dipped down as if deep in thought. 
A post-mortem disclosed that Yamamoto had received two .50-caliber bullet wounds, one to the back of his left shoulder and another to the left side of his lower jaw that exited above his right eye. The Japanese navy doctor examining the body determined that the head wound had killed Yamamoto. The more violent details of Yamamoto's death were hidden from the Japanese public. The medical report was changed \"on orders from above\", according to biographer Hiroyuki Agawa.", "title": "Death" }, { "paragraph_id": 55, "text": "Yamamoto's staff cremated his remains at Buin, Papua New Guinea, and his ashes were returned to Tokyo aboard the battleship Musashi, his last flagship. He was given a full state funeral on June 5, 1943, where he received, posthumously, the title of Marshal Admiral and was awarded the Order of the Chrysanthemum (1st Class). He was also awarded Nazi Germany's Knight's Cross of the Iron Cross with Oak Leaves and Swords. Some of his ashes were buried in the public Tama Cemetery, Tokyo (多摩霊園) and the remainder at his ancestral burial grounds at the temple of Chuko-ji in Nagaoka City. He was succeeded as commander-in-chief of the Combined Fleet by Admiral Mineichi Koga.", "title": "Death" }, { "paragraph_id": 56, "text": "In the years following Admiral Yamamoto's death, debate has arisen regarding whether he was assassinated rather than legally killed. Colonel Hays Parks, one of the U.S. government's foremost legal experts, wrote in his \"Memorandum of Law: Executive Order 12333 and Assassination\" that Admiral Yamamoto was killed because of his status as an enemy combatant in compliance with the applicable laws of war. Parks wrote that \"enemy combatants are legitimate targets at all times, regardless of their duties or activities at the time of their attack. 
Such attacks do not constitute assassination unless carried out in a 'treacherous' manner, as prohibited by article 23(b) of the Annex to the Hague Regulations (Hague Convention IV) of 1907.\"", "title": "Death" }, { "paragraph_id": 57, "text": "Yamamoto practiced calligraphy. He and his wife, Reiko, had four children: two sons and two daughters. Yamamoto was an avid gambler, enjoying Go, shogi, billiards, bridge, mahjong, poker, and other games that tested his wits and sharpened his mind. He frequently made jokes about moving to Monaco and starting his own casino.", "title": "Personal life" }, { "paragraph_id": 58, "text": "He enjoyed the company of geisha, and his wife Reiko revealed to the Japanese public in 1954 that Yamamoto was closer to his favorite geisha Kawai Chiyoko than to her, which stirred some controversy. His funeral procession passed by Kawai's quarters on the way to the cemetery. Yamamoto was close friends with Teikichi Hori, a Navy admiral and Yamamoto's classmate from the Imperial Japanese Naval Academy who was purged from the Navy for supporting the Washington Naval Treaty. Before and during the war Yamamoto frequently corresponded with Hori; these personal letters would become the subject of the NHK documentary The Truth of Yamamoto.", "title": "Personal life" }, { "paragraph_id": 59, "text": "The claim that Yamamoto was a Catholic is likely due to confusion with retired Admiral Shinjiro Stefano Yamamoto, who was a decade older than Isoroku, and died in 1942.", "title": "Personal life" }, { "paragraph_id": 60, "text": "Since the end of the Second World War, a number of Japanese and American films have depicted the character of Isoroku Yamamoto.", "title": "In popular culture" }, { "paragraph_id": 61, "text": "One of the most notable films is the 1970 movie Tora! Tora! 
Tora!, which stars Japanese actor Sō Yamamura as Yamamoto, who states after the attack on Pearl Harbor:", "title": "In popular culture" }, { "paragraph_id": 62, "text": "I fear that all we have done is to awaken a sleeping giant and fill him with a terrible resolve.", "title": "In popular culture" }, { "paragraph_id": 63, "text": "The first film to feature Yamamoto was Toho's 1953 film Eagle of the Pacific, in which Yamamoto was portrayed by Denjirō Ōkōchi.", "title": "In popular culture" }, { "paragraph_id": 64, "text": "The 1960 film The Gallant Hours depicts the battle of wits between Vice-Admiral William Halsey, Jr. and Yamamoto from the start of the Guadalcanal Campaign in August 1942 to Yamamoto's death in April 1943. The film, however, portrays Yamamoto's death as occurring in November 1942, the day after the Naval Battle of Guadalcanal, and the P-38 aircraft that killed him as coming from Guadalcanal.", "title": "In popular culture" }, { "paragraph_id": 65, "text": "In Daiei Studios's 1969 film Aa, kaigun (later released in the United States as Gateway to Glory), Yamamoto was portrayed by Shōgo Shimada.", "title": "In popular culture" }, { "paragraph_id": 66, "text": "Professional wrestler Harold Watanabe adopted the villainous Japanese gimmick of Tojo Yamamoto in reference to both Yamamoto and Hideki Tojo.", "title": "In popular culture" }, { "paragraph_id": 67, "text": "Award-winning Japanese actor Toshiro Mifune (star of The Seven Samurai) portrayed Yamamoto in three films:", "title": "In popular culture" }, { "paragraph_id": 68, "text": "A fictionalized version of Yamamoto's death was portrayed in the Baa Baa Black Sheep episode \"The Hawk Flies on Sunday\", though only photos of Yamamoto were shown. In this episode, set much later in the war than in real life, the Black Sheep, a Marine Corsair squadron, joins an army squadron of P-51 Mustangs. 
The Marines intercepted fighter cover while the army shot down Yamamoto.", "title": "In popular culture" }, { "paragraph_id": 69, "text": "In Shūe Matsubayashi's 1981 film Rengō kantai (lit. \"Combined Fleet\", later released in the United States as The Imperial Navy), Yamamoto was portrayed by Keiju Kobayashi.", "title": "In popular culture" }, { "paragraph_id": 70, "text": "In the 1993 OVA series Konpeki no Kantai (lit. Deep Blue Fleet), instead of dying in the plane crash, Yamamoto blacks out and suddenly wakes up as his younger self, Isoroku Takano, after the Battle of Tsushima in 1905. His memory from the original timeline intact, Yamamoto uses his knowledge of the future to help Japan become a stronger military power, eventually launching a coup d'état against Hideki Tōjō's government. In the subsequent Pacific War, Japan's technologically advanced navy decisively defeats the United States, and grants all of the former European and American colonies in Asia full independence. Later on, Yamamoto convinces Japan to join forces with the United States and Britain to defeat Nazi Germany. The series was criticized outside Japan as a whitewash of Imperial Japan's intentions towards its neighbors, and distancing itself from the wartime alliance with Nazi Germany.", "title": "In popular culture" }, { "paragraph_id": 71, "text": "In Neal Stephenson's 1999 book Cryptonomicon, Yamamoto's final moments are depicted, with him realizing that Japan's naval codes have been broken and that he must inform headquarters.", "title": "In popular culture" }, { "paragraph_id": 72, "text": "In the 2001 film Pearl Harbor, Yamamoto was portrayed by Oscar-nominated Japanese-born American actor Mako Iwamatsu. Like Tora! Tora! 
Tora!, this film also features a version of the sleeping giant quote.", "title": "In popular culture" }, { "paragraph_id": 73, "text": "In the 2004 anime series Zipang, Yamamoto (voiced by Bunmei Tobayama) works to develop the uneasy partnership with the crew of the JMSDF Mirai, which has been transported back sixty years through time to the year 1942.", "title": "In popular culture" }, { "paragraph_id": 74, "text": "In the Axis of Time trilogy by author John Birmingham, after a naval task force from the year 2021 is accidentally transported back through time to 1942, Yamamoto assumes a leadership role in the dramatic alteration of Japan's war strategy.", "title": "In popular culture" }, { "paragraph_id": 75, "text": "In The West Wing episode \"We Killed Yamamoto\", the Chairman of the Joint Chiefs of Staff uses the killing of Yamamoto to advocate for an assassination.", "title": "In popular culture" }, { "paragraph_id": 76, "text": "In Douglas Niles' 2007 book MacArthur's War: A Novel of the Invasion of Japan (written with Michael Dobson), which focuses on General Douglas MacArthur and an alternate history of the Pacific War (following a considerably different outcome of the Battle of Midway), Yamamoto is portrayed sympathetically, with much of the action in the Japanese government seen through his eyes, though he could not change the major decisions of Japan in World War II.", "title": "In popular culture" }, { "paragraph_id": 77, "text": "In Toei's 2011 war film Rengō Kantai Shirei Chōkan: Yamamoto Isoroku (Blu-ray titles: English \"The Admiral\"; German \"Der Admiral\"), Yamamoto was portrayed by Kōji Yakusho. 
The film portrays his career from Pearl Harbor to his death in Operation Vengeance.", "title": "In popular culture" }, { "paragraph_id": 78, "text": "In Robert Conroy's 2011 book Rising Sun, Yamamoto directs the IJN to launch a series of attacks on the American West Coast, in the hope that the United States can be convinced to sue for peace, securing Japan's place as a world power, but he cannot escape his lingering fear that the war will ultimately doom Japan.", "title": "In popular culture" }, { "paragraph_id": 79, "text": "In the 2019 motion picture Midway, Yamamoto is portrayed by Etsushi Toyokawa.", "title": "In popular culture" } ]
Isoroku Yamamoto was a Marshal Admiral of the Imperial Japanese Navy (IJN) and the commander-in-chief of the Combined Fleet during World War II. Yamamoto held several important posts in the Imperial Navy, and undertook many of its changes and reorganizations, especially its development of naval aviation. He was the commander-in-chief during the early years of the Pacific War and oversaw major engagements including the attack on Pearl Harbor and the Battle of Midway. Yamamoto was killed in April 1943 after American code breakers identified his flight plans, enabling the United States Army Air Forces to shoot down his plane. His death was a major blow to Japanese military morale during World War II.
2001-12-18T12:46:28Z
2023-12-23T18:44:50Z
[ "Template:According to whom", "Template:Unreferenced section", "Template:Convert", "Template:S-start", "Template:S-bef", "Template:Family name hatnote", "Template:Main", "Template:Cite news", "Template:Wikiquote", "Template:In lang", "Template:S-aft", "Template:Short description", "Template:Infobox military person", "Template:Sfn", "Template:Snd", "Template:Citation", "Template:ISBN", "Template:Blockquote", "Template:Page needed", "Template:Harvnb", "Template:PM20", "Template:S-ttl", "Template:S-off", "Template:Ship", "Template:USS", "Template:Commons category", "Template:Find a Grave", "Template:RP", "Template:Citation needed", "Template:'", "Template:Cbignore", "Template:S-mil", "Template:Subject bar", "Template:Nihongo", "Template:HMS", "Template:Cite book", "Template:Cite journal", "Template:Authority control", "Template:S-end", "Template:Redirect", "Template:Use mdy dates", "Template:Multiple image", "Template:Reflist", "Template:Cite web", "Template:Webarchive" ]
https://en.wikipedia.org/wiki/Isoroku_Yamamoto
15,412
Infrared spectroscopy
Infrared spectroscopy (IR spectroscopy or vibrational spectroscopy) is the measurement of the interaction of infrared radiation with matter by absorption, emission, or reflection. It is used to study and identify chemical substances or functional groups in solid, liquid, or gaseous forms. It can be used to characterize new materials or identify and verify known and unknown samples. The method or technique of infrared spectroscopy is conducted with an instrument called an infrared spectrometer (or spectrophotometer) which produces an infrared spectrum. An IR spectrum can be visualized in a graph of infrared light absorbance (or transmittance) on the vertical axis vs. frequency, wavenumber or wavelength on the horizontal axis. Typical units of wavenumber used in IR spectra are reciprocal centimeters, with the symbol cm⁻¹. Units of IR wavelength are commonly given in micrometers (formerly called "microns"), symbol μm, which are related to the wavenumber in a reciprocal way. A common laboratory instrument that uses this technique is a Fourier transform infrared (FTIR) spectrometer. Two-dimensional IR is also possible as discussed below. The infrared portion of the electromagnetic spectrum is usually divided into three regions: the near-, mid- and far-infrared, named for their relation to the visible spectrum. The higher-energy near-IR, approximately 14,000–4,000 cm⁻¹ (0.7–2.5 μm wavelength), can excite overtone or combination modes of molecular vibrations. The mid-infrared, approximately 4,000–400 cm⁻¹ (2.5–25 μm), is generally used to study the fundamental vibrations and associated rotational–vibrational structure. The far-infrared, approximately 400–10 cm⁻¹ (25–1,000 μm), has low energy and may be used for rotational spectroscopy and low frequency vibrations. The region from 2–130 cm⁻¹, bordering the microwave region, is considered the terahertz region and may probe intermolecular vibrations. 
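The reciprocal relation between wavelength and wavenumber described above can be sketched directly; this is a minimal illustration, and the function names are my own:

```python
def wavelength_um_to_wavenumber_cm1(wavelength_um: float) -> float:
    """Convert wavelength in micrometers to wavenumber in reciprocal
    centimeters. Since 1 cm = 10,000 um, the conversion factor is 1e4."""
    return 1e4 / wavelength_um

def wavenumber_cm1_to_wavelength_um(wavenumber_cm1: float) -> float:
    """Inverse conversion: wavenumber (cm^-1) back to wavelength (um)."""
    return 1e4 / wavenumber_cm1

# The region boundaries quoted in the text:
assert wavelength_um_to_wavenumber_cm1(2.5) == 4000.0   # near-IR / mid-IR boundary
assert wavelength_um_to_wavenumber_cm1(25.0) == 400.0   # mid-IR / far-IR boundary
```

The reciprocal form makes clear why a higher wavenumber corresponds to a shorter wavelength and higher photon energy.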
The names and classifications of these subregions are conventions, and are only loosely based on the relative molecular or electromagnetic properties. Infrared spectroscopy exploits the fact that molecules absorb frequencies that are characteristic of their structure. These absorptions occur at resonant frequencies, i.e. the frequency of the absorbed radiation matches the vibrational frequency. The energies are affected by the shape of the molecular potential energy surfaces, the masses of the atoms, and the associated vibronic coupling. In particular, in the Born–Oppenheimer and harmonic approximations (i.e. when the molecular Hamiltonian corresponding to the electronic ground state can be approximated by a harmonic oscillator in the neighborhood of the equilibrium molecular geometry), the resonant frequencies are associated with the normal modes of vibration corresponding to the molecular electronic ground state potential energy surface. The resonant frequencies are also related to the strength of the bond and the mass of the atoms at either end of it. Thus, the frequency of the vibrations is associated with a particular normal mode of motion and a particular bond type. In order for a vibrational mode in a sample to be "IR active", it must be associated with changes in the molecular dipole moment. A permanent dipole is not necessary, as the rule requires only a change in dipole moment. A molecule can vibrate in many ways, and each way is called a vibrational mode. For a molecule with N atoms, geometrically linear molecules have 3N – 5 vibrational modes, whereas nonlinear molecules have 3N – 6 vibrational modes (also called vibrational degrees of freedom). As examples, linear carbon dioxide (CO2) has 3 × 3 – 5 = 4, while non-linear water (H2O) has only 3 × 3 – 6 = 3. Simple diatomic molecules have only one bond and only one vibrational band. If the molecule is symmetrical, e.g. 
N2, the band is not observed in the IR spectrum, but only in the Raman spectrum. Asymmetrical diatomic molecules, e.g. carbon monoxide (CO), absorb in the IR spectrum. More complex molecules have many bonds, and their vibrational spectra are correspondingly more complex, i.e. big molecules have many peaks in their IR spectra. The atoms in a CH2X2 group, commonly found in organic compounds and where X can represent any other atom, can vibrate in nine different ways. Six of these vibrations involve only the CH2 portion: two stretching modes (ν): symmetric (νs) and antisymmetric (νas); and four bending modes: scissoring (δ), rocking (ρ), wagging (ω) and twisting (τ), as shown below. Structures that do not have the two additional X groups attached have fewer modes because some modes are defined by specific relationships to those other attached groups. For example, in water, the rocking, wagging, and twisting modes do not exist because these types of motions of the H atoms represent simple rotation of the whole molecule rather than vibrations within it. In the case of more complex molecules, out-of-plane (γ) vibrational modes can also be present. These figures do not represent the "recoil" of the C atoms, which, though necessarily present to balance the overall movements of the molecule, are much smaller than the movements of the lighter H atoms. The simplest and most important or fundamental IR bands arise from the excitations of normal modes, the simplest distortions of the molecule, from the ground state with vibrational quantum number v = 0 to the first excited state with vibrational quantum number v = 1. In some cases, overtone bands are observed. An overtone band arises from the absorption of a photon leading to a direct transition from the ground state to the second excited vibrational state (v = 2). Such a band appears at approximately twice the energy of the fundamental band for the same normal mode. 
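The 3N – 5 / 3N – 6 mode-counting rule described above is simple enough to express directly; a small sketch, with a function name of my own choosing:

```python
def vibrational_modes(n_atoms: int, linear: bool) -> int:
    """Count vibrational normal modes: 3N - 5 for a linear molecule,
    3N - 6 for a nonlinear one (translations and rotations removed
    from the 3N total degrees of freedom)."""
    return 3 * n_atoms - (5 if linear else 6)

# The worked examples from the text:
assert vibrational_modes(3, linear=True) == 4    # CO2, linear
assert vibrational_modes(3, linear=False) == 3   # H2O, nonlinear
assert vibrational_modes(2, linear=True) == 1    # a diatomic: one stretching mode
```

The diatomic case recovers the statement that a simple diatomic molecule has only one vibrational band.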
Some excitations, so-called combination modes, involve simultaneous excitation of more than one normal mode. The phenomenon of Fermi resonance can arise when two modes are similar in energy; Fermi resonance results in an unexpected shift in the energy and intensity of the bands. The infrared spectrum of a sample is recorded by passing a beam of infrared light through the sample. When the frequency of the IR is the same as the vibrational frequency of a bond or collection of bonds, absorption occurs. Examination of the transmitted light reveals how much energy was absorbed at each frequency (or wavelength). This measurement can be achieved by scanning the wavelength range using a monochromator. Alternatively, the entire wavelength range is measured using a Fourier transform instrument and then a transmittance or absorbance spectrum is generated using a dedicated procedure. This technique is commonly used for analyzing samples with covalent bonds. Simple spectra are obtained from samples with few IR active bonds and high levels of purity. More complex molecular structures lead to more absorption bands and more complex spectra. Gaseous samples require a sample cell with a long pathlength to compensate for the diluteness. The pathlength of the sample cell depends on the concentration of the compound of interest. A simple glass tube with a length of 5 to 10 cm equipped with infrared-transparent windows at both ends of the tube can be used for concentrations down to several hundred ppm. Sample gas concentrations well below ppm can be measured with a White's cell in which the infrared light is guided with mirrors to travel through the gas. White's cells are available with optical pathlengths from 0.5 m up to a hundred meters. Liquid samples can be sandwiched between two plates of a salt (commonly sodium chloride, or common salt, although a number of other salts such as potassium bromide or calcium fluoride are also used). 
The plates are transparent to the infrared light and do not introduce any lines onto the spectra. With advances in computer filtering and manipulation of the results, samples in solution can now be measured accurately (water produces a broad absorbance across the range of interest, and thus renders the spectra unreadable without this computer treatment). Solid samples can be prepared in a variety of ways. One common method is to crush the sample with an oily mulling agent (usually the mineral oil Nujol). A thin film of the mull is applied onto salt plates and measured. The second method is to grind a quantity of the sample with a specially purified salt (usually potassium bromide) finely (to remove scattering effects from large crystals). This powder mixture is then pressed in a mechanical press to form a translucent pellet through which the beam of the spectrometer can pass. A third technique is the "cast film" technique, which is used mainly for polymeric materials. The sample is first dissolved in a suitable, non-hygroscopic solvent. A drop of this solution is deposited on the surface of a KBr or NaCl cell. The solution is then evaporated to dryness and the film formed on the cell is analysed directly. Care must be taken to ensure that the film is not too thick; otherwise light cannot pass through. This technique is suitable for qualitative analysis. The final method is to use microtomy to cut a thin (20–100 μm) film from a solid sample. This is one of the most important ways of analysing failed plastic products, for example, because the integrity of the solid is preserved. In photoacoustic spectroscopy the need for sample treatment is minimal. The sample, liquid or solid, is placed into the sample cup, which is inserted into the photoacoustic cell, which is then sealed for the measurement. The sample may be a single solid piece, a powder, or basically in any form for the measurement. 
For example, a piece of rock can be inserted into the sample cup and the spectrum measured from it. A useful way of analyzing solid samples without the need for cutting uses attenuated total reflectance (ATR) spectroscopy. Using this approach, samples are pressed against the face of a single crystal. The infrared radiation passes through the crystal and only interacts with the sample at the interface between the two materials. It is typical to record a spectrum of both the sample and a "reference". This step controls for a number of variables, e.g. the infrared detector, which may affect the spectrum. The reference measurement makes it possible to eliminate the instrument influence. The appropriate "reference" depends on the measurement and its goal. The simplest reference measurement is simply to remove the sample (replacing it with air). However, sometimes a different reference is more useful. For example, if the sample is a dilute solute dissolved in water in a beaker, then a good reference measurement might be to measure pure water in the same beaker. The reference measurement would then cancel out not only all the instrumental properties (like which light source is used), but also the light-absorbing and light-reflecting properties of the water and beaker, and the final result would show just the properties of the solute (at least approximately). A common way to compare to a reference is to measure sequentially: first measure the reference, then replace the reference by the sample and measure the sample. This technique is not perfectly reliable; if the infrared lamp is a bit brighter during the reference measurement, then a bit dimmer during the sample measurement, the measurement will be distorted. More elaborate methods, such as a "two-beam" setup (see figure), can correct for these types of effects to give very accurate results. The standard addition method can be used to statistically cancel these errors.
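The ratioing step described above can be sketched numerically. In this illustrative snippet (the intensity values are hypothetical; real single-beam spectra contain thousands of points), dividing the sample's single-beam intensities by the reference's cancels the shared instrument response, and absorbance follows from a base-10 logarithm:

```python
import numpy as np

# Hypothetical single-beam intensities at three wavenumber points,
# measured once with only the reference in the beam and once with the sample.
I_reference = np.array([1.00, 0.80, 0.90])
I_sample = np.array([0.50, 0.76, 0.45])

transmittance = I_sample / I_reference  # instrument response cancels in the ratio
absorbance = -np.log10(transmittance)   # A = -log10(T)

# Strong absorption at the 1st and 3rd points (~0.30), weak at the 2nd.
print(absorbance)
```

If the lamp brightness drifts between the two measurements, the drift does not cancel in this ratio, which is exactly the sequential-measurement weakness noted above.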
Nevertheless, among the absorption-based techniques used for detecting gaseous species, cavity ring-down spectroscopy (CRDS) stands out as a calibration-free method. Because CRDS is based on measuring photon lifetimes (and not laser intensity), it requires no calibration or comparison with a reference. Some instruments also automatically identify the substance being measured from a store of thousands of reference spectra held in storage. Fourier transform infrared (FTIR) spectroscopy is a measurement technique that allows one to record infrared spectra. Infrared light is guided through an interferometer and then through the sample (or vice versa). A moving mirror inside the apparatus alters the distribution of infrared light that passes through the interferometer. The signal directly recorded, called an "interferogram", represents light output as a function of mirror position. A data-processing technique called the Fourier transform turns this raw data into the desired result (the sample's spectrum): light output as a function of infrared wavelength (or equivalently, wavenumber). As described above, the sample's spectrum is always compared to a reference. An alternative method for acquiring spectra is the "dispersive" or "scanning monochromator" method. In this approach, the sample is irradiated sequentially with various single wavelengths. The dispersive method is more common in UV-Vis spectroscopy, but is less practical in the infrared than the FTIR method. One reason that FTIR is favored is called "Fellgett's advantage" or the "multiplex advantage": the information at all frequencies is collected simultaneously, improving both speed and signal-to-noise ratio. Another is called "Jacquinot's throughput advantage": a dispersive measurement requires detecting much lower light levels than an FTIR measurement.
There are other advantages, as well as some disadvantages, but virtually all modern infrared spectrometers are FTIR instruments. Various forms of infrared microscopy exist. These include IR versions of sub-diffraction microscopy such as IR NSOM, photothermal microspectroscopy, Nano-FTIR and atomic force microscope based infrared spectroscopy (AFM-IR). Infrared spectroscopy is not the only method of studying molecular vibrational spectra. Raman spectroscopy involves an inelastic scattering process in which only part of the energy of an incident photon is absorbed by the molecule, and the remaining part is scattered and detected. The energy difference corresponds to absorbed vibrational energy. The selection rules for infrared and for Raman spectroscopy are different at least for some molecular symmetries, so that the two methods are complementary in that they observe vibrations of different symmetries. Another method is electron energy loss spectroscopy (EELS), in which the energy absorbed is provided by an inelastically scattered electron rather than a photon. This method is useful for studying vibrations of molecules adsorbed on a solid surface. Recently, high-resolution EELS (HREELS) has emerged as a technique for performing vibrational spectroscopy in a transmission electron microscope (TEM). In combination with the high spatial resolution of the TEM, unprecedented experiments have been performed, such as nano-scale temperature measurements, mapping of isotopically labeled molecules, mapping of phonon modes in position- and momentum-space, vibrational surface and bulk mode mapping on nanocubes, and investigations of polariton modes in van der Waals crystals. Analysis of vibrational modes that are IR-inactive but appear in inelastic neutron scattering is also possible at high spatial resolution using EELS. Although the spatial resolution of HREELS is very high, the bands are extremely broad compared to other techniques.
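The interferogram-to-spectrum step at the heart of FTIR, described above, can be sketched numerically. Here a synthetic interferogram is built for a source assumed to contain just two spectral components (1000 and 1600 cm⁻¹, chosen arbitrarily for illustration) as a function of optical path difference, and a discrete Fourier transform recovers the spectrum:

```python
import numpy as np

# Synthetic interferogram: detector signal vs. mirror optical path difference x.
n_points = 5000
dx = 1e-4                      # path-difference step in cm (gives 2 cm^-1 resolution)
x = np.arange(n_points) * dx
components = [1000.0, 1600.0]  # assumed source wavenumbers, cm^-1
interferogram = sum(np.cos(2 * np.pi * w * x) for w in components)

# The Fourier transform turns the interferogram into a spectrum vs. wavenumber.
spectrum = np.abs(np.fft.rfft(interferogram))
wavenumbers = np.fft.rfftfreq(n_points, d=dx)  # frequency axis in cm^-1

# The two largest peaks sit at the original component wavenumbers.
recovered = sorted(float(w) for w in wavenumbers[np.argsort(spectrum)[-2:]])
print(recovered)  # [1000.0, 1600.0]
```

Because every mirror position samples all wavenumbers at once, this toy model also hints at Fellgett's advantage: the whole spectrum is encoded in a single scan of the mirror.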
By using computer simulations and normal mode analysis it is possible to calculate the theoretical vibrational frequencies of molecules. IR spectroscopy is often used to identify structures because functional groups give rise to characteristic bands both in terms of intensity and position (frequency). The positions of these bands are summarized in correlation tables as shown below. A spectrum is often interpreted as having two regions. In the functional group region there are one to a few troughs per functional group. In the fingerprint region there are many troughs which form an intricate pattern that can be used like a fingerprint to identify the compound. For many kinds of samples, the assignments are known, i.e. which bond deformation(s) are associated with which frequency. In such cases further information can be gleaned about the strength of a bond, relying on the empirical guideline called Badger's rule. Originally published by Richard McLean Badger in 1934, this rule states that the strength of a bond (in terms of force constant) correlates with the bond length. That is, an increase in bond strength leads to a corresponding bond shortening, and vice versa. Infrared spectroscopy is a simple and reliable technique widely used in both organic and inorganic chemistry, in research and industry. In catalysis research it is a very useful tool to characterize the catalyst, as well as to detect intermediates and products during the catalytic reaction. It is used in quality control, dynamic measurement, and monitoring applications such as the long-term unattended measurement of CO2 concentrations in greenhouses and growth chambers by infrared gas analyzers. It is also used in forensic analysis in both criminal and civil cases, for example in identifying polymer degradation. It can be used in determining the blood alcohol content of a suspected drunk driver.
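As a minimal sketch of the frequency calculations mentioned above, and of the link between force constant and frequency that underlies Badger's rule, the harmonic-oscillator relation ν̃ = (1/2πc)·√(k/μ) gives the fundamental stretching wavenumber of a diatomic. The force constant used for carbon monoxide below (~1857 N/m) is an assumed, literature-style value:

```python
import math

C_CM_PER_S = 2.998e10  # speed of light, cm/s
AMU_KG = 1.6605e-27    # atomic mass unit, kg

def harmonic_wavenumber(k, m1, m2):
    """Fundamental wavenumber (cm^-1) of a diatomic harmonic oscillator.
    k: force constant (N/m); m1, m2: atomic masses (amu)."""
    mu = (m1 * m2) / (m1 + m2) * AMU_KG          # reduced mass, kg
    return math.sqrt(k / mu) / (2 * math.pi * C_CM_PER_S)

# Carbon monoxide: an assumed k of ~1857 N/m reproduces the well-known
# fundamental near 2143 cm^-1.
nu_co = harmonic_wavenumber(1857.0, 12.011, 15.999)
print(round(nu_co))  # ~2143
```

A stiffer (stronger, shorter) bond means a larger k and hence a higher wavenumber, which is the trend Badger's rule captures empirically.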
IR spectroscopy has been successfully used in the analysis and identification of pigments in paintings and other art objects such as illuminated manuscripts. Infrared spectroscopy is also useful in measuring the degree of polymerization in polymer manufacture. Changes in the character or quantity of a particular bond are assessed by measuring at a specific frequency over time. Modern research instruments can take infrared measurements across the range of interest as frequently as 32 times a second. This can be done whilst simultaneous measurements are made using other techniques, making observations of chemical reactions and processes quicker and more accurate. Infrared spectroscopy has also been successfully utilized in the field of semiconductor microelectronics: for example, infrared spectroscopy can be applied to semiconductors like silicon, gallium arsenide, gallium nitride, zinc selenide, amorphous silicon, silicon nitride, etc. Another important application of infrared spectroscopy is in the food industry, to measure the concentration of various compounds in different food products. The instruments are now small and can be transported, even for use in field trials. Infrared spectroscopy is also used in gas leak detection devices such as the DP-IR and EyeCGAs. These devices detect hydrocarbon gas leaks in the transportation of natural gas and crude oil. In February 2014, NASA announced a greatly upgraded database, based on IR spectroscopy, for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.
Infrared spectroscopy is an important analysis method in the recycling process of household waste plastics, and a convenient stand-off method to sort plastics of different polymers (PET, HDPE, ...). Other developments include a miniature IR spectrometer that is linked to a cloud-based database and suitable for personal everyday use, and NIR-spectroscopic chips that can be embedded in smartphones and various gadgets. The different isotopes in a particular species may exhibit different fine details in infrared spectroscopy. For example, the O–O stretching frequency (in reciprocal centimeters) of oxyhemocyanin is experimentally determined to be 832 and 788 cm⁻¹ for ν(¹⁶O–¹⁶O) and ν(¹⁸O–¹⁸O), respectively. By considering the O–O bond as a spring, the frequency of absorbance can be calculated as a wavenumber [= frequency/(speed of light)]: ν̃ = (1/(2πc))√(k/μ), where k is the spring constant for the bond, c is the speed of light, and μ = m₁m₂/(m₁ + m₂) is the reduced mass of the A–B system (mᵢ is the mass of atom i). The reduced masses for ¹⁶O–¹⁶O and ¹⁸O–¹⁸O can be approximated as 8 and 9, respectively. Thus the ratio of the two stretching wavenumbers is predicted to be √(9/8) ≈ 1.06, in good agreement with the observed ratio 832/788 ≈ 1.06. The effect of isotopes, both on the vibration and the decay dynamics, has been found to be stronger than previously thought. In some systems, such as silicon and germanium, the decay of the anti-symmetric stretch mode of interstitial oxygen involves the symmetric stretch mode with a strong isotope dependence. For example, it was shown that for a natural silicon sample, the lifetime of the anti-symmetric vibration is 11.4 ps. When the isotope of one of the silicon atoms is increased to ²⁹Si, the lifetime increases to 19 ps; similarly, when the silicon atom is changed to ³⁰Si, the lifetime becomes 27 ps. Two-dimensional infrared correlation spectroscopy analysis combines multiple samples of infrared spectra to reveal more complex properties. By extending the spectral information of a perturbed sample, spectral analysis is simplified and resolution is enhanced.
The 2D synchronous and 2D asynchronous spectra represent a graphical overview of the spectral changes due to a perturbation (such as a changing concentration or changing temperature) as well as the relationship between the spectral changes at two different wavenumbers. Nonlinear two-dimensional infrared spectroscopy is the infrared version of correlation spectroscopy. Nonlinear two-dimensional infrared spectroscopy is a technique that has become available with the development of femtosecond infrared laser pulses. In this experiment, first a set of pump pulses is applied to the sample. This is followed by a waiting time during which the system is allowed to relax. The typical waiting time lasts from zero to several picoseconds, and the duration can be controlled with a resolution of tens of femtoseconds. A probe pulse is then applied, resulting in the emission of a signal from the sample. The nonlinear two-dimensional infrared spectrum is a two-dimensional correlation plot of the frequency ω1 that was excited by the initial pump pulses and the frequency ω3 excited by the probe pulse after the waiting time. This allows the observation of coupling between different vibrational modes; because of its extremely fine time resolution, it can be used to monitor molecular dynamics on a picosecond timescale. It is still a largely unexplored technique and is becoming increasingly popular for fundamental research. As with two-dimensional nuclear magnetic resonance (2DNMR) spectroscopy, this technique spreads the spectrum in two dimensions and allows for the observation of cross peaks that contain information on the coupling between different modes. In contrast to 2DNMR, nonlinear two-dimensional infrared spectroscopy also involves the excitation to overtones. These excitations result in excited state absorption peaks located below the diagonal and cross peaks. In 2DNMR, two distinct techniques, COSY and NOESY, are frequently used. 
The cross peaks in the former are related to scalar coupling, while those in the latter are related to spin transfer between different nuclei. In nonlinear two-dimensional infrared spectroscopy, analogs have been drawn to these 2DNMR techniques. Nonlinear two-dimensional infrared spectroscopy with zero waiting time corresponds to COSY, and nonlinear two-dimensional infrared spectroscopy with finite waiting time allowing vibrational population transfer corresponds to NOESY. The COSY variant of nonlinear two-dimensional infrared spectroscopy has been used for the determination of the secondary structure content of proteins.
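The isotope shift discussed earlier for the oxyhemocyanin O–O stretch can be checked with a few lines of arithmetic: since ν̃ ∝ 1/√μ in the spring model, the ¹⁸O–¹⁸O wavenumber is predicted from the ¹⁶O–¹⁶O one using the approximate reduced masses of 8 and 9 amu quoted above:

```python
import math

nu_16 = 832.0            # observed nu(16O-16O) stretch, cm^-1
mu_16, mu_18 = 8.0, 9.0  # approximate reduced masses, amu

# Wavenumber scales as 1/sqrt(reduced mass), so:
nu_18_predicted = nu_16 * math.sqrt(mu_16 / mu_18)
print(round(nu_18_predicted))  # ~784, close to the observed 788 cm^-1
```

The few-wavenumber discrepancy reflects the crudeness of treating the bound O–O unit as an isolated harmonic spring with integer masses.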
[ { "paragraph_id": 0, "text": "Infrared spectroscopy (IR spectroscopy or vibrational spectroscopy) is the measurement of the interaction of infrared radiation with matter by absorption, emission, or reflection. It is used to study and identify chemical substances or functional groups in solid, liquid, or gaseous forms. It can be used to characterize new materials or identify and verify known and unknown samples. The method or technique of infrared spectroscopy is conducted with an instrument called an infrared spectrometer (or spectrophotometer) which produces an infrared spectrum. An IR spectrum can be visualized in a graph of infrared light absorbance (or transmittance) on the vertical axis vs. frequency, wavenumber or wavelength on the horizontal axis. Typical units of wavenumber used in IR spectra are reciprocal centimeters, with the symbol cm. Units of IR wavelength are commonly given in micrometers (formerly called \"microns\"), symbol μm, which are related to the wavenumber in a reciprocal way. A common laboratory instrument that uses this technique is a Fourier transform infrared (FTIR) spectrometer. Two-dimensional IR is also possible as discussed below.", "title": "" }, { "paragraph_id": 1, "text": "The infrared portion of the electromagnetic spectrum is usually divided into three regions; the near-, mid- and far- infrared, named for their relation to the visible spectrum. The higher-energy near-IR, approximately 14,000–4,000 cm (0.7–2.5 μm wavelength) can excite overtone or combination modes of molecular vibrations. The mid-infrared, approximately 4,000–400 cm (2.5–25 μm) is generally used to study the fundamental vibrations and associated rotational–vibrational structure. The far-infrared, approximately 400–10 cm (25–1,000 μm) has low energy and may be used for rotational spectroscopy and low frequency vibrations. The region from 2–130 cm, bordering the microwave region, is considered the terahertz region and may probe intermolecular vibrations. 
The names and classifications of these subregions are conventions, and are only loosely based on the relative molecular or electromagnetic properties.", "title": "" }, { "paragraph_id": 2, "text": "Infrared spectroscopy exploits the fact that molecules absorb frequencies that are characteristic of their structure. These absorptions occur at resonant frequencies, i.e. the frequency of the absorbed radiation matches the vibrational frequency. The energies are affected by the shape of the molecular potential energy surfaces, the masses of the atoms, and the associated vibronic coupling.", "title": "Theory" }, { "paragraph_id": 3, "text": "In particular, in the Born–Oppenheimer and harmonic approximations (i.e. when the molecular Hamiltonian corresponding to the electronic ground state can be approximated by a harmonic oscillator in the neighborhood of the equilibrium molecular geometry), the resonant frequencies are associated with the normal modes of vibration corresponding to the molecular electronic ground state potential energy surface.", "title": "Theory" }, { "paragraph_id": 4, "text": "The resonant frequencies are also related to the strength of the bond and the mass of the atoms at either end of it. Thus, the frequency of the vibrations are associated with a particular normal mode of motion and a particular bond type.", "title": "Theory" }, { "paragraph_id": 5, "text": "In order for a vibrational mode in a sample to be \"IR active\", it must be associated with changes in the molecular dipole moment. A permanent dipole is not necessary, as the rule requires only a change in dipole moment.", "title": "Theory" }, { "paragraph_id": 6, "text": "A molecule can vibrate in many ways, and each way is called a vibrational mode. For molecules with N number of atoms, geometrically linear molecules have 3N – 5 degrees of vibrational modes, whereas nonlinear molecules have 3N – 6 degrees of vibrational modes (also called vibrational degrees of freedom). 
As examples linear carbon dioxide (CO2) has 3 × 3 – 5 = 4, while non-linear water (H2O), has only 3 × 3 – 6 = 3.", "title": "Theory" }, { "paragraph_id": 7, "text": "Simple diatomic molecules have only one bond and only one vibrational band. If the molecule is symmetrical, e.g. N2, the band is not observed in the IR spectrum, but only in the Raman spectrum. Asymmetrical diatomic molecules, e.g. carbon monoxide (CO), absorb in the IR spectrum. More complex molecules have many bonds, and their vibrational spectra are correspondingly more complex, i.e. big molecules have many peaks in their IR spectra.", "title": "Theory" }, { "paragraph_id": 8, "text": "The atoms in a CH2X2 group, commonly found in organic compounds and where X can represent any other atom, can vibrate in nine different ways. Six of these vibrations involve only the CH2 portion: two stretching modes (ν): symmetric (νs) and antisymmetric (νas); and four bending modes: scissoring (δ), rocking (ρ), wagging (ω) and twisting (τ), as shown below. Structures that do not have the two additional X groups attached have fewer modes because some modes are defined by specific relationships to those other attached groups. For example, in water, the rocking, wagging, and twisting modes do not exist because these types of motions of the H atoms represent simple rotation of the whole molecule rather than vibrations within it. 
In case of more complex molecules, out-of-plane (γ) vibrational modes can be also present.", "title": "Theory" }, { "paragraph_id": 9, "text": "These figures do not represent the \"recoil\" of the C atoms, which, though necessarily present to balance the overall movements of the molecule, are much smaller than the movements of the lighter H atoms.", "title": "Theory" }, { "paragraph_id": 10, "text": "The simplest and most important or fundamental IR bands arise from the excitations of normal modes, the simplest distortions of the molecule, from the ground state with vibrational quantum number v = 0 to the first excited state with vibrational quantum number v = 1. In some cases, overtone bands are observed. An overtone band arises from the absorption of a photon leading to a direct transition from the ground state to the second excited vibrational state (v = 2). Such a band appears at approximately twice the energy of the fundamental band for the same normal mode. Some excitations, so-called combination modes, involve simultaneous excitation of more than one normal mode. The phenomenon of Fermi resonance can arise when two modes are similar in energy; Fermi resonance results in an unexpected shift in energy and intensity of the bands etc.", "title": "Theory" }, { "paragraph_id": 11, "text": "The infrared spectrum of a sample is recorded by passing a beam of infrared light through the sample. When the frequency of the IR is the same as the vibrational frequency of a bond or collection of bonds, absorption occurs. Examination of the transmitted light reveals how much energy was absorbed at each frequency (or wavelength). This measurement can be achieved by scanning the wavelength range using a monochromator. 
Alternatively, the entire wavelength range is measured using a Fourier transform instrument and then a transmittance or absorbance spectrum is generated using a dedicated procedure.", "title": "Practical IR spectroscopy" }, { "paragraph_id": 12, "text": "This technique is commonly used for analyzing samples with covalent bonds. Simple spectra are obtained from samples with few IR active bonds and high levels of purity. More complex molecular structures lead to more absorption bands and more complex spectra.", "title": "Practical IR spectroscopy" }, { "paragraph_id": 13, "text": "Gaseous samples require a sample cell with a long pathlength to compensate for the diluteness. The pathlength of the sample cell depends on the concentration of the compound of interest. A simple glass tube with length of 5 to 10 cm equipped with infrared-transparent windows at both ends of the tube can be used for concentrations down to several hundred ppm. Sample gas concentrations well below ppm can be measured with a White's cell in which the infrared light is guided with mirrors to travel through the gas. White's cells are available with optical pathlength starting from 0.5 m up to hundred meters.", "title": "Practical IR spectroscopy" }, { "paragraph_id": 14, "text": "Liquid samples can be sandwiched between two plates of a salt (commonly sodium chloride, or common salt, although a number of other salts such as potassium bromide or calcium fluoride are also used). The plates are transparent to the infrared light and do not introduce any lines onto the spectra. With increasing technology in computer filtering and manipulation of the results, samples in solution can now be measured accurately (water produces a broad absorbance across the range of interest, and thus renders the spectra unreadable without this computer treatment).", "title": "Practical IR spectroscopy" }, { "paragraph_id": 15, "text": "Solid samples can be prepared in a variety of ways. 
One common method is to crush the sample with an oily mulling agent (usually mineral oil Nujol). A thin film of the mull is applied onto salt plates and measured. The second method is to grind a quantity of the sample with a specially purified salt (usually potassium bromide) finely (to remove scattering effects from large crystals). This powder mixture is then pressed in a mechanical press to form a translucent pellet through which the beam of the spectrometer can pass. A third technique is the \"cast film\" technique, which is used mainly for polymeric materials. The sample is first dissolved in a suitable, non-hygroscopic solvent. A drop of this solution is deposited on the surface of a KBr or NaCl cell. The solution is then evaporated to dryness and the film formed on the cell is analysed directly. Care is important to ensure that the film is not too thick otherwise light cannot pass through. This technique is suitable for qualitative analysis. The final method is to use microtomy to cut a thin (20–100 μm) film from a solid sample. This is one of the most important ways of analysing failed plastic products for example because the integrity of the solid is preserved.", "title": "Practical IR spectroscopy" }, { "paragraph_id": 16, "text": "In photoacoustic spectroscopy the need for sample treatment is minimal. The sample, liquid or solid, is placed into the sample cup which is inserted into the photoacoustic cell which is then sealed for the measurement. The sample may be one solid piece, powder or basically in any form for the measurement. For example, a piece of rock can be inserted into the sample cup and the spectrum measured from it.", "title": "Practical IR spectroscopy" }, { "paragraph_id": 17, "text": "A useful way of analyzing solid samples without the need for cutting samples uses ATR or attenuated total reflectance spectroscopy. Using this approach, samples are pressed against the face of a single crystal. 
The infrared radiation passes through the crystal and only interacts with the sample at the interface between the two materials.", "title": "Practical IR spectroscopy" }, { "paragraph_id": 18, "text": "It is typical to record spectrum of both the sample and a \"reference\". This step controls for a number of variables, e.g. infrared detector, which may affect the spectrum. The reference measurement makes it possible to eliminate the instrument influence.", "title": "Practical IR spectroscopy" }, { "paragraph_id": 19, "text": "The appropriate \"reference\" depends on the measurement and its goal. The simplest reference measurement is to simply remove the sample (replacing it by air). However, sometimes a different reference is more useful. For example, if the sample is a dilute solute dissolved in water in a beaker, then a good reference measurement might be to measure pure water in the same beaker. Then the reference measurement would cancel out not only all the instrumental properties (like what light source is used), but also the light-absorbing and light-reflecting properties of the water and beaker, and the final result would just show the properties of the solute (at least approximately).", "title": "Practical IR spectroscopy" }, { "paragraph_id": 20, "text": "A common way to compare to a reference is sequentially: first measure the reference, then replace the reference by the sample and measure the sample. This technique is not perfectly reliable; if the infrared lamp is a bit brighter during the reference measurement, then a bit dimmer during the sample measurement, the measurement will be distorted. More elaborate methods, such as a \"two-beam\" setup (see figure), can correct for these types of effects to give very accurate results. 
The Standard addition method can be used to statistically cancel these errors.", "title": "Practical IR spectroscopy" }, { "paragraph_id": 21, "text": "Nevertheless, among different absorption-based techniques which are used for gaseous species detection, Cavity ring-down spectroscopy (CRDS) can be used as a calibration-free method. The fact that CRDS is based on the measurements of photon life-times (and not the laser intensity) makes it needless for any calibration and comparison with a reference", "title": "Practical IR spectroscopy" }, { "paragraph_id": 22, "text": "Some instruments also automatically identify the substance being measured from a store of thousands of reference spectra held in storage.", "title": "Practical IR spectroscopy" }, { "paragraph_id": 23, "text": "Fourier transform infrared (FTIR) spectroscopy is a measurement technique that allows one to record infrared spectra. Infrared light is guided through an interferometer and then through the sample (or vice versa). A moving mirror inside the apparatus alters the distribution of infrared light that passes through the interferometer. The signal directly recorded, called an \"interferogram\", represents light output as a function of mirror position. A data-processing technique called Fourier transform turns this raw data into the desired result (the sample's spectrum): light output as a function of infrared wavelength (or equivalently, wavenumber). As described above, the sample's spectrum is always compared to a reference.", "title": "Practical IR spectroscopy" }, { "paragraph_id": 24, "text": "An alternate method for acquiring spectra is the \"dispersive\" or \"scanning monochromator\" method. In this approach, the sample is irradiated sequentially with various single wavelengths. The dispersive method is more common in UV-Vis spectroscopy, but is less practical in the infrared than the FTIR method. 
One reason that FTIR is favored is called \"Fellgett's advantage\" or the \"multiplex advantage\": The information at all frequencies is collected simultaneously, improving both speed and signal-to-noise ratio. Another is called \"Jacquinot's Throughput Advantage\": A dispersive measurement requires detecting much lower light levels than an FTIR measurement. There are other advantages, as well as some disadvantages, but virtually all modern infrared spectrometers are FTIR instruments.", "title": "Practical IR spectroscopy" }, { "paragraph_id": 25, "text": "Various forms of infrared microscopy exist. These include IR versions of sub-diffraction microscopy such as IR NSOM, photothermal microspectroscopy, Nano-FTIR and atomic force microscope based infrared spectroscopy (AFM-IR).", "title": "Practical IR spectroscopy" }, { "paragraph_id": 26, "text": "Infrared spectroscopy is not the only method of studying molecular vibrational spectra. Raman spectroscopy involves an inelastic scattering process in which only part of the energy of an incident photon is absorbed by the molecule, and the remaining part is scattered and detected. The energy difference corresponds to absorbed vibrational energy.", "title": "Practical IR spectroscopy" }, { "paragraph_id": 27, "text": "The selection rules for infrared and for Raman spectroscopy are different at least for some molecular symmetries, so that the two methods are complementary in that they observe vibrations of different symmetries.", "title": "Practical IR spectroscopy" }, { "paragraph_id": 28, "text": "Another method is electron energy loss spectroscopy (EELS), in which the energy absorbed is provided by an inelastically scattered electron rather than a photon. 
This method is useful for studying vibrations of molecules adsorbed on a solid surface.", "title": "Practical IR spectroscopy" }, { "paragraph_id": 29, "text": "Recently, high-resolution EELS (HREELS) has emerged as a technique for performing vibrational spectroscopy in a transmission electron microscope (TEM). In combination with the high spatial resolution of the TEM, unprecedented experiments have been performed, such as nano-scale temperature measurements, mapping of isotopically labeled molecules, mapping of phonon modes in position- and momentum-space, vibrational surface and bulk mode mapping on nanocubes, and investigations of polariton modes in van der Waals crystals. Analysis of vibrational modes that are IR-inactive but appear in inelastic neutron scattering is also possible at high spatial resolution using EELS. Although the spatial resolution of HREELs is very high, the bands are extremely broad compared to other techniques.", "title": "Practical IR spectroscopy" }, { "paragraph_id": 30, "text": "By using computer simulations and normal mode analysis it is possible to calculate theoretical frequencies of molecules.", "title": "Practical IR spectroscopy" }, { "paragraph_id": 31, "text": "IR spectroscopy is often used to identify structures because functional groups give rise to characteristic bands both in terms of intensity and position (frequency). 
The positions of these bands are summarized in correlation tables as shown below.", "title": "Absorption bands" }, { "paragraph_id": 32, "text": "A spectrograph is often interpreted as having two regions.", "title": "Absorption bands" }, { "paragraph_id": 33, "text": "In the functional region there are one to a few troughs per functional group.", "title": "Absorption bands" }, { "paragraph_id": 34, "text": "In the fingerprint region there are many troughs which form an intricate pattern which can be used like a fingerprint to determine the compound.", "title": "Absorption bands" }, { "paragraph_id": 35, "text": "For many kinds of samples, the assignments are known, i.e. which bond deformation(s) are associated with which frequency. In such cases further information can be gleaned about the strength on a bond, relying on the empirical guideline called Badger's rule. Originally published by Richard McLean Badger in 1934, this rule states that the strength of a bond (in terms of force constant) correlates with the bond length. That is, increase in bond strength leads to corresponding bond shortening and vice versa.", "title": "Absorption bands" }, { "paragraph_id": 36, "text": "Infrared spectroscopy is a simple and reliable technique widely used in both organic and inorganic chemistry, in research and industry. In catalysis research it is a very useful tool to characterize the catalyst, as well as to detect intermediates and products during the catalytic reaction. It is used in quality control, dynamic measurement, and monitoring applications such as the long-term unattended measurement of CO2 concentrations in greenhouses and growth chambers by infrared gas analyzers.", "title": "Uses and applications" }, { "paragraph_id": 37, "text": "It is also used in forensic analysis in both criminal and civil cases, for example in identifying polymer degradation. 
It can be used in determining the blood alcohol content of a suspected drunk driver.", "title": "Uses and applications" }, { "paragraph_id": 38, "text": "IR spectroscopy has been successfully used in analysis and identification of pigments in paintings and other art objects such as illuminated manuscripts.", "title": "Uses and applications" }, { "paragraph_id": 39, "text": "Infrared spectroscopy is also useful in measuring the degree of polymerization in polymer manufacture. Changes in the character or quantity of a particular bond are assessed by measuring at a specific frequency over time. Modern research instruments can take infrared measurements across the range of interest as frequently as 32 times a second. This can be done whilst simultaneous measurements are made using other techniques. This makes the observations of chemical reactions and processes quicker and more accurate.", "title": "Uses and applications" }, { "paragraph_id": 40, "text": "Infrared spectroscopy has also been successfully utilized in the field of semiconductor microelectronics: for example, infrared spectroscopy can be applied to semiconductors like silicon, gallium arsenide, gallium nitride, zinc selenide, amorphous silicon, silicon nitride, etc.", "title": "Uses and applications" }, { "paragraph_id": 41, "text": "Another important application of infrared spectroscopy is in the food industry to measure the concentration of various compounds in different food products.", "title": "Uses and applications" }, { "paragraph_id": 42, "text": "The instruments are now small, and can be transported, even for use in field trials.", "title": "Uses and applications" }, { "paragraph_id": 43, "text": "Infrared spectroscopy is also used in gas leak detection devices such as the DP-IR and EyeCGAs. 
These devices detect hydrocarbon gas leaks in the transportation of natural gas and crude oil.", "title": "Uses and applications" }, { "paragraph_id": 44, "text": "In February 2014, NASA announced a greatly upgraded database, based on IR spectroscopy, for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.", "title": "Uses and applications" }, { "paragraph_id": 45, "text": "Infrared spectroscopy is an important analysis method in the recycling process of household waste plastics, and a convenient stand-off method to sort plastic of different polymers (PET, HDPE, ...).", "title": "Uses and applications" }, { "paragraph_id": 46, "text": "Other developments include a miniature IR-spectrometer that is linked to a cloud-based database and suitable for personal everyday use, and NIR-spectroscopic chips that can be embedded in smartphones and various gadgets.", "title": "Uses and applications" }, { "paragraph_id": 47, "text": "The different isotopes in a particular species may exhibit different fine details in infrared spectroscopy.
For example, the O–O stretching frequency (in reciprocal centimeters) of oxyhemocyanin is experimentally determined to be 832 and 788 cm−1 for ν(¹⁶O–¹⁶O) and ν(¹⁸O–¹⁸O), respectively.", "title": "Isotope effects" }, { "paragraph_id": 48, "text": "By considering the O–O bond as a spring, the frequency of absorbance can be calculated as a wavenumber [= frequency/(speed of light)]: ν̃ = (1/2πc)√(k/μ),", "title": "Isotope effects" }, { "paragraph_id": 49, "text": "where k is the spring constant for the bond, c is the speed of light, and μ is the reduced mass of the A–B system: μ = m_A·m_B/(m_A + m_B)", "title": "Isotope effects" }, { "paragraph_id": 50, "text": "(m_i is the mass of atom i).", "title": "Isotope effects" }, { "paragraph_id": 51, "text": "The reduced masses for ¹⁶O–¹⁶O and ¹⁸O–¹⁸O can be approximated as 8 and 9 respectively. Thus ν̃(¹⁶O–¹⁶O)/ν̃(¹⁸O–¹⁸O) = √(9/8) ≈ 1.06, in agreement with the observed ratio of 832/788.", "title": "Isotope effects" }, { "paragraph_id": 52, "text": "The effect of isotopes, both on the vibration and the decay dynamics, has been found to be stronger than previously thought. In some systems, such as silicon and germanium, the decay of the anti-symmetric stretch mode of interstitial oxygen involves the symmetric stretch mode with a strong isotope dependence. For example, it was shown that for a natural silicon sample, the lifetime of the anti-symmetric vibration is 11.4 ps. When the isotope of one of the silicon atoms is increased to ²⁹Si, the lifetime increases to 19 ps. In similar manner, when the silicon atom is changed to ³⁰Si, the lifetime becomes 27 ps.", "title": "Isotope effects" }, { "paragraph_id": 53, "text": "Two-dimensional infrared correlation spectroscopy analysis combines multiple samples of infrared spectra to reveal more complex properties. By extending the spectral information of a perturbed sample, spectral analysis is simplified and resolution is enhanced.
The 2D synchronous and 2D asynchronous spectra represent a graphical overview of the spectral changes due to a perturbation (such as a changing concentration or changing temperature) as well as the relationship between the spectral changes at two different wavenumbers.", "title": "Two-dimensional IR" }, { "paragraph_id": 54, "text": "Nonlinear two-dimensional infrared spectroscopy is the infrared version of correlation spectroscopy. Nonlinear two-dimensional infrared spectroscopy is a technique that has become available with the development of femtosecond infrared laser pulses. In this experiment, first a set of pump pulses is applied to the sample. This is followed by a waiting time during which the system is allowed to relax. The typical waiting time lasts from zero to several picoseconds, and the duration can be controlled with a resolution of tens of femtoseconds. A probe pulse is then applied, resulting in the emission of a signal from the sample. The nonlinear two-dimensional infrared spectrum is a two-dimensional correlation plot of the frequency ω1 that was excited by the initial pump pulses and the frequency ω3 excited by the probe pulse after the waiting time. This allows the observation of coupling between different vibrational modes; because of its extremely fine time resolution, it can be used to monitor molecular dynamics on a picosecond timescale. It is still a largely unexplored technique and is becoming increasingly popular for fundamental research.", "title": "Two-dimensional IR" }, { "paragraph_id": 55, "text": "As with two-dimensional nuclear magnetic resonance (2DNMR) spectroscopy, this technique spreads the spectrum in two dimensions and allows for the observation of cross peaks that contain information on the coupling between different modes. In contrast to 2DNMR, nonlinear two-dimensional infrared spectroscopy also involves the excitation to overtones. 
These excitations result in excited state absorption peaks located below the diagonal and cross peaks. In 2DNMR, two distinct techniques, COSY and NOESY, are frequently used. The cross peaks in the first are related to the scalar coupling, while in the latter they are related to the spin transfer between different nuclei. In nonlinear two-dimensional infrared spectroscopy, analogs have been drawn to these 2DNMR techniques. Nonlinear two-dimensional infrared spectroscopy with zero waiting time corresponds to COSY, and nonlinear two-dimensional infrared spectroscopy with finite waiting time allowing vibrational population transfer corresponds to NOESY. The COSY variant of nonlinear two-dimensional infrared spectroscopy has been used for determination of the secondary structure content of proteins.", "title": "Two-dimensional IR" } ]
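The harmonic-oscillator estimate used in the Isotope effects section can be checked numerically. The sketch below is illustrative (function names are my own): treating the O–O bond as a spring with a fixed force constant k, the wavenumber scales as 1/√μ, so the known ¹⁶O–¹⁶O band can be rescaled to predict the ¹⁸O–¹⁸O band.

```python
import math

def reduced_mass(m_a, m_b):
    """Reduced mass of a diatomic A-B system: mu = mA*mB/(mA + mB)."""
    return m_a * m_b / (m_a + m_b)

def scaled_wavenumber(nu_ref, mu_ref, mu_new):
    """For a fixed force constant k, nu ~ 1/sqrt(mu), so a measured
    wavenumber can be rescaled to a different isotopologue."""
    return nu_ref * math.sqrt(mu_ref / mu_new)

mu_16 = reduced_mass(16, 16)  # = 8, as approximated in the text
mu_18 = reduced_mass(18, 18)  # = 9
# Observed nu(16O-16O) = 832 cm^-1; predict the 18O-18O band.
predicted = scaled_wavenumber(832, mu_16, mu_18)
print(round(predicted, 1))  # 784.4 cm^-1, close to the observed 788 cm^-1
```

The small residual gap between the predicted ~784 cm−1 and the observed 788 cm−1 reflects the limits of the simple spring model (anharmonicity and mode coupling are neglected).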
Infrared spectroscopy is the measurement of the interaction of infrared radiation with matter by absorption, emission, or reflection. It is used to study and identify chemical substances or functional groups in solid, liquid, or gaseous forms. It can be used to characterize new materials or identify and verify known and unknown samples. The method or technique of infrared spectroscopy is conducted with an instrument called an infrared spectrometer which produces an infrared spectrum. An IR spectrum can be visualized in a graph of infrared light absorbance on the vertical axis vs. frequency, wavenumber or wavelength on the horizontal axis. Typical units of wavenumber used in IR spectra are reciprocal centimeters, with the symbol cm−1. Units of IR wavelength are commonly given in micrometers, symbol μm, which are related to the wavenumber in a reciprocal way. A common laboratory instrument that uses this technique is a Fourier transform infrared (FTIR) spectrometer. Two-dimensional IR is also possible as discussed below. The infrared portion of the electromagnetic spectrum is usually divided into three regions; the near-, mid- and far- infrared, named for their relation to the visible spectrum. The higher-energy near-IR, approximately 14,000–4,000 cm−1 can excite overtone or combination modes of molecular vibrations. The mid-infrared, approximately 4,000–400 cm−1 (2.5–25 μm) is generally used to study the fundamental vibrations and associated rotational–vibrational structure. The far-infrared, approximately 400–10 cm−1 (25–1,000 μm) has low energy and may be used for rotational spectroscopy and low frequency vibrations. The region from 2–130 cm−1, bordering the microwave region, is considered the terahertz region and may probe intermolecular vibrations. The names and classifications of these subregions are conventions, and are only loosely based on the relative molecular or electromagnetic properties.
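The reciprocal relation between wavenumber and wavelength mentioned above can be made concrete with a one-line conversion (a minimal sketch; the function name is illustrative), which reproduces the mid-infrared limits quoted in the text:

```python
def wavenumber_to_micrometers(wn_cm):
    """Wavelength in micrometers for a wavenumber in cm^-1:
    lambda[um] = 10^4 / nu[cm^-1], since 1 cm = 10^4 um."""
    return 1e4 / wn_cm

# The mid-IR range of 4,000-400 cm^-1 corresponds to 2.5-25 um.
print(wavenumber_to_micrometers(4000))  # 2.5
print(wavenumber_to_micrometers(400))   # 25.0
```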
2001-12-19T09:57:53Z
2023-11-05T06:47:47Z
[ "Template:Div col", "Template:Cite encyclopedia", "Template:Webarchive", "Template:Clear", "Template:Diagonal split header", "Template:Cite journal", "Template:Commons category", "Template:BranchesofSpectroscopy", "Template:Main", "Template:Div col end", "Template:Reflist", "Template:Cite book", "Template:Branches of chemistry", "Template:For", "Template:Citation needed", "Template:Analytical chemistry", "Template:Short description", "Template:Cite web" ]
https://en.wikipedia.org/wiki/Infrared_spectroscopy
15,414
Irenaeus
Irenaeus (/ɪrɪˈneɪəs/; Greek: Εἰρηναῖος Eirēnaios; c. 130 – c. 202 AD) was a Greek bishop noted for his role in guiding and expanding Christian communities in the southern regions of present-day France and, more widely, for the development of Christian theology by combating heterodox or Gnostic interpretations of Scripture as heresy and defining proto-orthodoxy. Originating from Smyrna, he had seen and heard the preaching of Polycarp, who in turn was said to have heard John the Evangelist, and thus was the last-known living connection with the Apostles. Chosen as bishop of Lugdunum, now Lyon, his best-known work is Against Heresies, often cited as Adversus Haereses, a refutation of gnosticism, in particular that of Valentinus. To counter the doctrines of the gnostic sects claiming secret wisdom, he offered three pillars of orthodoxy: the scriptures, the tradition handed down from the apostles, and the teaching of the apostles' successors. Intrinsic to his writing is that the surest source of Christian guidance is the Church of Rome, and he is the earliest surviving witness to regard all four of the now-canonical gospels as essential. He is recognized as a saint in the Catholic Church, which celebrates his feast on 28 June, and in the Eastern Orthodox Churches, which celebrate the feast on 23 August. Irenaeus is honored in the Church of England and in the Episcopal Church on 28 June. Pope Francis declared Irenaeus the 37th Doctor of the Church on 21 January 2022. Irenaeus was a Greek from Polycarp's hometown of Smyrna in Asia Minor, now İzmir, Turkey, born during the first half of the 2nd century. The exact date is thought to be between the years 120 and 140. Unlike many of his contemporaries, he was brought up in a Christian family rather than converting as an adult. During the persecution of Christians by Marcus Aurelius, the Roman emperor from 161 to 180, Irenaeus was a priest of the Church of Lyon.
The clergy of that city, many of whom were suffering imprisonment for the faith, sent him in 177 to Rome with a letter to Pope Eleutherius concerning the heresy of Montanism, and that occasion bore emphatic testimony to his merits. While Irenaeus was in Rome, a persecution took place in Lyon. Returning to Gaul, Irenaeus succeeded the martyr Saint Pothinus and became the second bishop of Lyon. During the religious peace which followed the persecution by Marcus Aurelius, the new bishop divided his activities between the duties of a pastor and of a missionary (as to which we have but brief data, late and not very certain). Almost all his writings were directed against Gnosticism. The most famous of these writings is Adversus haereses (Against Heresies). Irenaeus alludes to coming across Gnostic writings, and holding conversations with Gnostics, and this may have taken place in Asia Minor or in Rome. However, it also appears that Gnosticism was present near Lyon: he writes that there were followers of 'Marcus the Magician' living and teaching in the Rhone valley. Little is known about the career of Irenaeus after he became bishop. The last action reported of him (by Eusebius, 150 years later) is that in 190 or 191, he exerted influence on Pope Victor I not to excommunicate the Christian communities of Asia Minor which persevered in the practice of the Quartodeciman celebration of Easter. Nothing is known of the date of his death, which must have occurred at the end of the second or the beginning of the third century. He is regarded as a martyr by the Catholic Church and by some within the Orthodox Church. He was buried under the Church of Saint John in Lyon, which was later renamed St Irenaeus in his honour. The tomb and his remains were utterly destroyed in 1562 by the Huguenots. Irenaeus wrote a number of books, but the most important that survives is the Against Heresies (or, in its Latin title, Adversus haereses). 
In Book I, Irenaeus talks about the Valentinian Gnostics and their predecessors, who he says go as far back as the magician Simon Magus. In Book II he attempts to provide proof that Valentinianism contains no merit in terms of its doctrines. In Book III Irenaeus purports to show that these doctrines are false, by providing counter-evidence gleaned from the Gospels. Book IV consists of Jesus's sayings, and here Irenaeus also stresses the unity of the Old Testament and the Gospel. In the final volume, Book V, Irenaeus focuses on more sayings of Jesus plus the letters of Paul the Apostle. Irenaeus wrote: "One should not seek among others the truth that can be easily gotten from the Church. For in her, as in a rich treasury, the apostles have placed all that pertains to truth, so that everyone can drink this beverage of life. She is the door of life." But he also said, "Christ came not only for those who believed from the time of Tiberius Caesar, nor did the Father provide only for those who are now, but for absolutely all men from the beginning, who, according to their ability, feared and loved God and lived justly. . . and desired to see Christ and to hear His voice." The purpose of "Against Heresies" was to refute the teachings of various Gnostic groups; apparently, several Greek merchants had begun an oratorical campaign in Irenaeus's bishopric, teaching that the material world was the accidental creation of an evil god, from which we are to escape by the pursuit of gnosis. Irenaeus argued that the true gnosis is in fact knowledge of Christ, which redeems rather than escapes from bodily existence. Until the discovery of the Library of Nag Hammadi in 1945, Against Heresies was the best-surviving description of Gnosticism. Some religious scholars have argued the findings at Nag Hammadi have shown Irenaeus's description of Gnosticism to be inaccurate and polemic in nature.
However, the general consensus among modern scholars is that Irenaeus was fairly accurate in his transmission of gnostic beliefs, and that the Nag Hammadi texts have raised no substantial challenges to the overall accuracy of Irenaeus's information. Religious historian Elaine Pagels criticizes Irenaeus for describing Gnostic groups as sexual libertines, for example, when some of their own writings advocated chastity more strongly than did orthodox texts. However, the Nag Hammadi texts do not present a single, coherent picture of any unified gnostic system of belief, but rather divergent beliefs of multiple Gnostic sects. Some of these sects were indeed libertine because they considered bodily existence meaningless; others praised chastity, and strongly prohibited any sexual activity, even within marriage. Irenaeus also wrote The Demonstration of the Apostolic Preaching (also known as Proof of the Apostolic Preaching), an Armenian copy of which was discovered in 1904. This work seems to have been an instruction for recent Christian converts. Eusebius attests to other works by Irenaeus, today lost, including On the Ogdoad, an untitled letter to Blastus regarding schism, On the Subject of Knowledge, On the Monarchy or How God is not the Cause of Evil, On Easter. Irenaeus exercised wide influence on the generation which followed. Both Hippolytus and Tertullian freely drew on his writings. However, none of his works aside from Against Heresies and The Demonstration of the Apostolic Preaching survive today, perhaps because his literal hope of an earthly millennium may have made him uncongenial reading in the Greek East. Even though no complete version of Against Heresies in its original Greek exists, we possess the full ancient Latin version, probably of the third century, as well as thirty-three fragments of a Syrian version and a complete Armenian version of books 4 and 5. 
Evelyn Underhill in her book Mysticism credited Irenaeus as being one of those to whom we owe "the preservation of that mighty system of scaffolding which enabled the Catholic mystics to build up the towers and bulwarks of the City of God." Irenaeus's works were first translated into English by John Keble and published in 1872 as part of the Library of the Fathers series. Irenaeus pointed to the public rule of faith, authoritatively articulated by the preaching of bishops and inculcated in Church practice, especially worship, as an authentic apostolic tradition by which to read Scripture truly against heresies. He classified as Scripture not only the Old Testament but most of the books now known as the New Testament, while excluding many works, a large number by Gnostics, that flourished in the 2nd century and claimed scriptural authority. Irenaeus, as a student of Polycarp, who was a direct disciple of the Apostle John, believed that he was interpreting scriptures in the same hermeneutic as the Apostles. This connection to Jesus was important to Irenaeus because both he and the Gnostics based their arguments on Scripture. Irenaeus argued that since he could trace his authority to Jesus and the Gnostics could not, his interpretation of Scripture was correct. He also used "the Rule of Faith", a "proto-creed" with similarities to the Apostles' Creed, as a hermeneutical key to argue that his interpretation of Scripture was correct. Before Irenaeus, Christians differed as to which gospel they preferred. The Christians of Asia Minor preferred the Gospel of John. The Gospel of Matthew was the most popular overall. Irenaeus asserted that all four of the Gospels, John, Luke, Matthew, and Mark (which is the order presented in his four pillar narrative in Adversus haereses (Against Heresies) III 11,8), were canonical scripture.
Thus Irenaeus provides the earliest witness to the assertion of the four canonical Gospels, possibly in reaction to Marcion's edited version of the Gospel of Luke, which Marcion asserted was the one and only true gospel. Based on the arguments Irenaeus made in support of only four authentic gospels, some interpreters deduce that the fourfold Gospel must have still been a novelty in Irenaeus's time. Against Heresies 3.11.7 acknowledges that many heterodox Christians use only one gospel while 3.11.9 acknowledges that some use more than four. The success of Tatian's Diatessaron in about the same time period is "... a powerful indication that the fourfold Gospel contemporaneously sponsored by Irenaeus was not broadly, let alone universally, recognized." (The apologist and ascetic Tatian had previously harmonized the four gospels into a single narrative, the Diatessaron, c. 150–160.) Irenaeus is also the earliest attestation that the Gospel of John was written by John the Apostle, and that the Gospel of Luke was written by Luke, the companion of Paul. Scholars contend that Irenaeus quotes from 21 of the 27 New Testament books. He may refer to Hebrews 2:30 and James 4:16 and maybe even 2 Peter 5:28, but does not cite Philemon. Irenaeus cited the New Testament approximately 1,000 times. About one third of his citations are made to Paul's letters. Irenaeus considered all 13 letters belonging to the Pauline corpus to have been written by Paul himself. In his writing against the Gnostics, who claimed to possess a secret oral tradition from Jesus himself, Irenaeus maintained that the bishops in different cities are known as far back as the Apostles and that the oral tradition he lists from the Apostles is a safe guide to the interpretation of Scripture.
In a passage that became a locus classicus of Catholic-Protestant polemics, he cited the Roman church as an example of the unbroken chain of authority, which text Catholic polemics would use to assert the primacy of Rome over Eastern churches by virtue of its preeminent authority. The succession of bishops and presbyters was important to establish a chain of custody for orthodoxy. Irenaeus's point when refuting the Gnostics was that all of the Apostolic churches had preserved the same traditions and teachings in many independent streams. It was the unanimous agreement between these many independent streams of transmission that proved the orthodox faith, current in those churches, to be true. The central point of Irenaeus's theology is the unity and the goodness of God, in opposition to the Gnostics' theory of God as a number of divine emanations (Aeons), along with a distinction between the Monad and the Demiurge. Irenaeus uses the Logos theology he inherited from Justin Martyr. Irenaeus was a student of Polycarp, who was said to have been tutored by John the Apostle. (John had used Logos terminology in the Gospel of John and the letter of 1 John). Irenaeus often spoke of the Son and the Spirit as the "hands of God," though he also spoke of the Son as the "Logos." Irenaeus's emphasis on the unity of God is reflected in his corresponding emphasis on the unity of salvation history. Irenaeus repeatedly insists that God began the world and has been overseeing it ever since this creative act; everything that has happened is part of his plan for humanity. The essence of this plan is a process of maturation: Irenaeus believes that humanity was created immature, and God intended his creatures to take a long time to grow into or assume the divine likeness. Everything that has happened since has therefore been planned by God to help humanity overcome this initial mishap and achieve spiritual maturity.
The world has been intentionally designed by God as a difficult place, where human beings are forced to make moral decisions, as only in this way can they mature as moral agents. Irenaeus likens death to the big fish that swallowed Jonah: it was only in the depths of the whale's belly that Jonah could turn to God and act according to the divine will. Similarly, death and suffering appear as evils, but without them we could never come to know God. According to Irenaeus, the high point in salvation history is the advent of Jesus. For Irenaeus, the Incarnation of Christ was intended by God before he determined that humanity would be created. Irenaeus develops this idea based on Rom. 5:14, saying "For inasmuch as He had a pre-existence as a saving Being, it was necessary that what might be saved should also be called into existence, in order that the Being who saves should not exist in vain." Some theologians maintain that Irenaeus believed that Incarnation would have occurred even if humanity had never sinned; but the fact that they did sin determined his role as the savior. Irenaeus sees Christ as the new Adam, who systematically undoes what Adam did: thus, where Adam was disobedient concerning God's edict concerning the fruit of the Tree of Knowledge of Good and Evil, Christ was obedient even to death on the wood of a tree. Irenaeus is the first to draw comparisons between Eve and Mary, contrasting the faithlessness of the former with the faithfulness of the latter. In addition to reversing the wrongs done by Adam, Irenaeus thinks of Christ as "recapitulating" or "summing up" human life. Irenaeus conceives of our salvation as essentially coming about through the incarnation of God as a man. He characterizes the penalty for sin as death and corruption. God, however, is immortal and incorruptible, and simply by becoming united to human nature in Christ he conveys those qualities to us: they spread, as it were, like a benign infection.
Irenaeus emphasizes that salvation occurs through Christ's Incarnation, which bestows incorruptibility on humanity, rather than emphasizing His Redemptive death in the crucifixion, although the latter event is an integral part of the former. Part of the process of recapitulation is for Christ to go through every stage of human life, from infancy to old age, and simply by living it, sanctify it with his divinity. Although it is sometimes claimed that Irenaeus believed Christ did not die until he was older than is conventionally portrayed, the bishop of Lyon simply pointed out that because Jesus reached the permissible age for becoming a rabbi (30 years old and above), he recapitulated and sanctified the period between 30 and 50 years old, as per the Jewish custom of periodization of life, and so touches the beginning of old age when one becomes 50 years old. (see Adversus Haereses, book II, chapter 22). In the passage of Adversus Haereses under consideration, Irenaeus is clear that after receiving baptism at the age of thirty, citing Luke 3:23, Gnostics then falsely assert that "He [Jesus] preached only one year reckoning from His baptism," and also, "On completing His thirtieth year He [Jesus] suffered, being in fact still a young man, and who had by no means attained to advanced age." Irenaeus argues against the Gnostics by using scripture to add several years after his baptism by referencing 3 distinctly separate visits to Jerusalem. The first is when Jesus makes wine out of water, he goes up to the Paschal feast-day, after which he withdraws and is found in Samaria. The second is when Jesus goes up to Jerusalem for Passover and cures the paralytic, after which he withdraws over the sea of Tiberias. The third mention is when he travels to Jerusalem, eats the Passover, and suffers on the following day. Irenaeus quotes scripture (John 8:57), to suggest that Jesus ministers while in his 40s.
In this passage, Jesus's opponents want to argue that Jesus has not seen Abraham, because Jesus is too young. Jesus's opponents argue that Jesus was not yet 50 years old. Irenaeus argues that if Jesus were in his thirties, his opponents would have argued that he was not yet 40 years old, since that would make him even younger. Irenaeus's argument is that they would not weaken their own argument by adding years to Jesus's age. Irenaeus also writes: "The Elders witness to this, who in Asia conferred with John the Lord's disciple, to the effect that John had delivered these things unto them: for he abode with them until the times of Trajan. And some of them saw not only John, but others also of the Apostles, and had this same account from them, and witness to the aforesaid relation." In Demonstration (74) Irenaeus notes "For Pontius Pilate was governor of Judæa, and he had at that time resentful enmity against Herod the king of the Jews. But then, when Christ was brought to him bound, Pilate sent Him to Herod, giving command to enquire of him, that he might know of a certainty what he should desire concerning Him; making Christ a convenient occasion of reconciliation with the king." Pilate was the prefect of the Roman province of Judaea from AD 26–36. He served under Emperor Tiberius Claudius Nero. Herod Antipas was tetrarch of Galilee and Perea, a client state of the Roman Empire. He ruled from 4 BC to 39 AD. In refuting Gnostic claims that Jesus preached for only one year after his baptism, Irenaeus used the "recapitulation" approach to demonstrate that by living beyond the age of thirty Christ sanctified even old age. Many aspects of Irenaeus's presentation of salvation history depend on Paul's Epistles. Irenaeus's conception of salvation relies heavily on the understanding found in Paul's letters. Irenaeus first brings up the theme of victory over sin and evil that is afforded by Jesus's death. 
God's intervention has saved humanity from the Fall of Adam and the wickedness of Satan. Human nature has become joined with God's in the person of Jesus, thus allowing human nature to have victory over sin. Paul writes on the same theme, that Christ has come so that a new order is formed, and being under the Law, is being under the sin of Adam. Reconciliation is also a theme of Paul's that Irenaeus stresses in his teachings on Salvation. Irenaeus believes Jesus coming in flesh and blood sanctified humanity so that it might again reflect the perfection associated with the likeness of the Divine. This perfection leads to a new life, in the lineage of God, which is forever striving for eternal life and unity with the Father. This is a carryover from Paul, who attributes this reconciliation to the actions of Christ: "For since death came through a human being, the resurrection of the dead has also come through a human being; for as all die in Adam, so all will be made alive in Christ". A third theme in both Paul's and Irenaeus's conceptions of salvation is the sacrifice of Christ being necessary for the new life given to humanity in the triumph over evil. It is in this obedient sacrifice that Jesus is victor and reconciler, thus erasing the marks that Adam left on human nature. To argue against the Gnostics on this point, Irenaeus uses Colossians in showing that the debt which came by a tree has been paid for us in another tree. Furthermore, the first chapter of Ephesians is picked up in Irenaeus's discussion of the topic when he asserts, "By His own blood He redeemed us, as also His apostle declares, 'In whom we have redemption through His blood, even the remission of sins.'" The frequencies of quotations and allusions to the Pauline Epistles in Against Heresies are: To counter his Gnostic opponents, Irenaeus significantly develops Paul's presentation of Christ as the Last Adam. 
Irenaeus's presentation of Christ as the New Adam is based on Paul's Christ-Adam parallel in Romans 5:12–21. Irenaeus uses this parallel to demonstrate that Christ truly took human flesh. Irenaeus considered it important to emphasize this point because he understands the failure to recognize Christ's full humanity as the bond linking the various strains of Gnosticism together, as seen in his statement that "according to the opinion of no one of the heretics was the Word of God made flesh." Irenaeus believes that unless the Word became flesh, humans were not fully redeemed. He explains that by becoming man, Christ restored humanity to being in the image and likeness of God, which they had lost in the Fall of man. Just as Adam was the original head of humanity through whom all sinned, Christ is the new head of humanity who fulfills Adam's role in the Economy of Salvation. Irenaeus calls this process of restoring humanity recapitulation. For Irenaeus, Paul's presentation of the Old Law (the Mosaic covenant) in this passage indicates that the Old Law revealed humanity's sinfulness but could not save them. He explains that "For as the law was spiritual, it merely made sin to stand out in relief, but did not destroy it. For sin had no dominion over the spirit, but over man." Since humans have a physical nature, they cannot be saved by a spiritual law. Instead, they need a human Savior. This is why it was necessary for Christ to take human flesh. Irenaeus summarizes how Christ's taking human flesh saves humanity with a statement that closely resembles Romans 5:19, "For as by the disobedience of the one man who was originally moulded from virgin soil, the many were made sinners, and forfeited life; so was it necessary that, by the obedience of one man, who was originally born from a virgin, many should be justified and receive salvation." The physical creation of Adam and Christ is emphasized by Irenaeus to demonstrate how the Incarnation saves humanity's physical nature.
Irenaeus emphasizes the importance of Christ's reversal of Adam's action. Through His obedience, Christ undoes Adam's disobedience. Irenaeus presents the Passion as the climax of Christ's obedience, emphasizing how this obedience on the tree of the Cross undoes the disobedience that occurred through a tree.

Irenaeus's interpretation of Paul's discussion of Christ as the New Adam is significant because it helped develop the recapitulation theory of atonement. Irenaeus emphasizes that humanity is saved through Christ's reversal of Adam's action, rather than considering the Redemption to occur in a cultic or juridical way.

The biblical passage "Death has been swallowed up in victory" implied for Irenaeus that the Lord will surely resurrect the first human, i.e. Adam, as one of the saved. According to Irenaeus, those who deny Adam's salvation are "shutting themselves out from life for ever", and the first one who did so was Tatian. The notion that the Second Adam saved the first Adam was advocated not only by Irenaeus but also by Gregory Thaumaturgus, which suggests that it was popular in the Early Church.

Valentinian Gnosticism was one of the major forms of Gnosticism that Irenaeus opposed. According to the Gnostic view of Salvation, creation was perfect to begin with; it did not need time to grow and mature. For the Valentinians, the material world is the result of the loss of perfection which resulted from Sophia's desire to understand the Forefather. Therefore, one is ultimately redeemed, through secret knowledge, to enter the pleroma from which Achamoth originally fell.

According to the Valentinian Gnostics, there are three classes of human beings: the material, who cannot attain salvation; the psychic, who are strengthened by works and faith (they are part of the church); and the spiritual, who cannot decay or be harmed by material actions.
Essentially, ordinary humans—those who have faith but do not possess the special knowledge—will not attain salvation. Spirituals, on the other hand—those who obtain this great gift—are the only class that will eventually attain salvation.

In his article entitled "The Demiurge", J.P. Arendzen sums up the Valentinian view of the salvation of man. He writes, "The first, or carnal men, will return to the grossness of matter and finally be consumed by fire; the second, or psychic men, together with the Demiurge as their master, will enter a middle state, neither heaven (pleroma) nor hell (hyle); the purely spiritual men will be completely freed from the influence of the Demiurge and together with the Saviour and Achamoth, his spouse, will enter the pleroma divested of body (húle) and soul (psuché)."

In this understanding of salvation, the purpose of the Incarnation was to redeem the Spirituals from their material bodies. By taking a material body, the Son becomes the Savior and facilitates this entrance into the pleroma by making it possible for the Spirituals to receive his spiritual body. However, in becoming a body and soul, the Son Himself becomes one of those needing redemption. Therefore, the Word descends onto the Savior at His Baptism in the Jordan, which liberates the Son from his corruptible body and soul. His redemption from the body and soul is then applied to the Spirituals. In response to this Gnostic view of Christ, Irenaeus emphasized that the Word became flesh and developed a soteriology that emphasized the significance of Christ's material Body in saving humanity, as discussed in the sections above.

In his criticism of Gnosticism, Irenaeus made reference to a Gnostic gospel which portrayed Judas in a positive light, as having acted in accordance with Jesus's instructions.
The recently discovered Gospel of Judas dates close to the period when Irenaeus lived (late 2nd century), and scholars typically regard this work as one of many Gnostic texts, showing one of many varieties of Gnostic beliefs of the period.

Irenaeus took part in the Quartodeciman Controversy. When Victor I of Rome tried to enforce a universal practice of fasting until Easter, to supersede the Jewish practice and prevent Christians from partaking of the Passover, Polycrates, who led the Churches of Asia Minor, continued to hold the old traditions of the paschal feast. For this reason Victor I wanted to excommunicate Polycrates and his supporters, but this was a step too far for Irenaeus and other bishops.
[ { "paragraph_id": 0, "text": "Irenaeus (/ɪrɪˈneɪəs/; Greek: Εἰρηναῖος Eirēnaios; c. 130 – c. 202 AD) was a Greek bishop noted for his role in guiding and expanding Christian communities in the southern regions of present-day France and, more widely, for the development of Christian theology by combating heterodox or Gnostic interpretations of Scripture as heresy and defining proto-orthodoxy. Originating from Smyrna, he had seen and heard the preaching of Polycarp, who in turn was said to have heard John the Evangelist, and thus was the last-known living connection with the Apostles.", "title": "" }, { "paragraph_id": 1, "text": "Chosen as bishop of Lugdunum, now Lyon, his best-known work is Against Heresies, often cited as Adversus Haereses, a refutation of gnosticism, in particular that of Valentinus. To counter the doctrines of the gnostic sects claiming secret wisdom, he offered three pillars of orthodoxy: the scriptures, the tradition handed down from the apostles, and the teaching of the apostles' successors. Intrinsic to his writing is that the surest source of Christian guidance is the Church of Rome, and he is the earliest surviving witness to regard all four of the now-canonical gospels as essential.", "title": "" }, { "paragraph_id": 2, "text": "He is recognized as a saint in the Catholic Church, which celebrates his feast on 28 June, and in the Eastern Orthodox Churches, which celebrates the feast on 23 August.", "title": "" }, { "paragraph_id": 3, "text": "Irenaeus is honored in the Church of England and in the Episcopal Church on 28 June. Pope Francis declared Irenaeus the 37th Doctor of the Church on 21 January 2022.", "title": "" }, { "paragraph_id": 4, "text": "Irenaeus was a Greek from Polycarp's hometown of Smyrna in Asia Minor, now İzmir, Turkey, born during the first half of the 2nd century. The exact date is thought to be between the years 120 and 140. 
Unlike many of his contemporaries, he was brought up in a Christian family rather than converting as an adult.", "title": "Biography" }, { "paragraph_id": 5, "text": "During the persecution of Christians by Marcus Aurelius, the Roman emperor from 161 to 180, Irenaeus was a priest of the Church of Lyon. The clergy of that city, many of whom were suffering imprisonment for the faith, sent him in 177 to Rome with a letter to Pope Eleutherius concerning the heresy of Montanism, and that occasion bore emphatic testimony to his merits. While Irenaeus was in Rome, a persecution took place in Lyon. Returning to Gaul, Irenaeus succeeded the martyr Saint Pothinus and became the second bishop of Lyon.", "title": "Biography" }, { "paragraph_id": 6, "text": "During the religious peace which followed the persecution by Marcus Aurelius, the new bishop divided his activities between the duties of a pastor and of a missionary (as to which we have but brief data, late and not very certain). Almost all his writings were directed against Gnosticism. The most famous of these writings is Adversus haereses (Against Heresies). Irenaeus alludes to coming across Gnostic writings, and holding conversations with Gnostics, and this may have taken place in Asia Minor or in Rome. However, it also appears that Gnosticism was present near Lyon: he writes that there were followers of 'Marcus the Magician' living and teaching in the Rhone valley.", "title": "Biography" }, { "paragraph_id": 7, "text": "Little is known about the career of Irenaeus after he became bishop. 
The last action reported of him (by Eusebius, 150 years later) is that in 190 or 191, he exerted influence on Pope Victor I not to excommunicate the Christian communities of Asia Minor which persevered in the practice of the Quartodeciman celebration of Easter.", "title": "Biography" }, { "paragraph_id": 8, "text": "Nothing is known of the date of his death, which must have occurred at the end of the second or the beginning of the third century. He is regarded as a martyr by the Catholic Church and by some within the Orthodox Church. He was buried under the Church of Saint John in Lyon, which was later renamed St Irenaeus in his honour. The tomb and his remains were utterly destroyed in 1562 by the Huguenots.", "title": "Biography" }, { "paragraph_id": 9, "text": "Irenaeus wrote a number of books, but the most important that survives is the Against Heresies (or, in its Latin title, Adversus haereses). In Book I, Irenaeus talks about the Valentinian Gnostics and their predecessors, who he says go as far back as the magician Simon Magus. In Book II he attempts to provide proof that Valentinianism contains no merit in terms of its doctrines. In Book III Irenaeus purports to show that these doctrines are false, by providing counter-evidence gleaned from the Gospels. Book IV consists of Jesus's sayings, and here Irenaeus also stresses the unity of the Old Testament and the Gospel. In the final volume, Book V, Irenaeus focuses on more sayings of Jesus plus the letters of Paul the Apostle.", "title": "Writings" }, { "paragraph_id": 10, "text": "Irenaeus wrote: \"One should not seek among others the truth that can be easily gotten from the Church. For in her, as in a rich treasury, the apostles have placed all that pertains to truth, so that everyone can drink this beverage of life. 
She is the door of life.\" But he also said, \"Christ came not only for those who believed from the time of Tiberius Caesar, nor did the Father provide only for those who are now, but for absolutely all men from the beginning, who, according to their ability, feared and loved God and lived justly. . . and desired to see Christ and to hear His voice.\"", "title": "Writings" }, { "paragraph_id": 11, "text": "The purpose of \"Against Heresies\" was to refute the teachings of various Gnostic groups; apparently, several Greek merchants had begun an oratorial campaign in Irenaeus's bishopric, teaching that the material world was the accidental creation of an evil god, from which we are to escape by the pursuit of gnosis. Irenaeus argued that the true gnosis is in fact knowledge of Christ, which redeems rather than escapes from bodily existence.", "title": "Writings" }, { "paragraph_id": 12, "text": "Until the discovery of the Library of Nag Hammadi in 1945, Against Heresies was the best-surviving description of Gnosticism. Some religious scholars have argued the findings at Nag Hammadi have shown Irenaeus's description of Gnosticism to be inaccurate and polemic in nature. However, the general consensus among modern scholars is that Irenaeus was fairly accurate in his transmission of gnostic beliefs, and that the Nag Hammadi texts have raised no substantial challenges to the overall accuracy of Irenaeus's information. Religious historian Elaine Pagels criticizes Irenaeus for describing Gnostic groups as sexual libertines, for example, when some of their own writings advocated chastity more strongly than did orthodox texts. However, the Nag Hammadi texts do not present a single, coherent picture of any unified gnostic system of belief, but rather divergent beliefs of multiple Gnostic sects. 
Some of these sects were indeed libertine because they considered bodily existence meaningless; others praised chastity, and strongly prohibited any sexual activity, even within marriage.", "title": "Writings" }, { "paragraph_id": 13, "text": "Irenaeus also wrote The Demonstration of the Apostolic Preaching (also known as Proof of the Apostolic Preaching), an Armenian copy of which was discovered in 1904. This work seems to have been an instruction for recent Christian converts.", "title": "Writings" }, { "paragraph_id": 14, "text": "Eusebius attests to other works by Irenaeus, today lost, including On the Ogdoad, an untitled letter to Blastus regarding schism, On the Subject of Knowledge, On the Monarchy or How God is not the Cause of Evil, On Easter.", "title": "Writings" }, { "paragraph_id": 15, "text": "Irenaeus exercised wide influence on the generation which followed. Both Hippolytus and Tertullian freely drew on his writings. However, none of his works aside from Against Heresies and The Demonstration of the Apostolic Preaching survive today, perhaps because his literal hope of an earthly millennium may have made him uncongenial reading in the Greek East. 
Even though no complete version of Against Heresies in its original Greek exists, we possess the full ancient Latin version, probably of the third century, as well as thirty-three fragments of a Syrian version and a complete Armenian version of books 4 and 5.", "title": "Writings" }, { "paragraph_id": 16, "text": "Evelyn Underhill in her book Mysticism credited Irenaeus as being one of those to whom we owe \"the preservation of that mighty system of scaffolding which enabled the Catholic mystics to build up the towers and bulwarks of the City of God.\"", "title": "Writings" }, { "paragraph_id": 17, "text": "Irenaeus's works were first translated into English by John Keble and published in 1872 as part of the Library of the Fathers series.", "title": "Writings" }, { "paragraph_id": 18, "text": "Irenaeus pointed to the public rule of faith, authoritatively articulated by the preaching of bishops and inculcated in Church practice, especially worship, as an authentic apostolic tradition by which to read Scripture truly against heresies. He classified as Scripture not only the Old Testament but most of the books now known as the New Testament, while excluding many works, a large number by Gnostics, that flourished in the 2nd century and claimed scriptural authority. Oftentimes, Irenaeus, as a student of Polycarp, who was a direct disciple of the Apostle John, believed that he was interpreting scriptures in the same hermeneutic as the Apostles. This connection to Jesus was important to Irenaeus because both he and the Gnostics based their arguments on Scripture. Irenaeus argued that since he could trace his authority to Jesus and the Gnostics could not, his interpretation of Scripture was correct. 
He also used \"the Rule of Faith\", a \"proto-creed\" with similarities to the Apostles' Creed, as a hermeneutical key to argue that his interpretation of Scripture was correct.", "title": "Scripture" }, { "paragraph_id": 19, "text": "Before Irenaeus, Christians differed as to which gospel they preferred. The Christians of Asia Minor preferred the Gospel of John. The Gospel of Matthew was the most popular overall. Irenaeus asserted that all four of the Gospels, John, Luke, Matthew, and Mark (which is the order presented in his four pillar narrative in Adversus haereses (Against Heresies) III 11,8), were canonical scripture. Thus Irenaeus provides the earliest witness to the assertion of the four canonical Gospels, possibly in reaction to Marcion's edited version of the Gospel of Luke, which Marcion asserted was the one and only true gospel.", "title": "Scripture" }, { "paragraph_id": 20, "text": "Based on the arguments Irenaeus made in support of only four authentic gospels, some interpreters deduce that the fourfold Gospel must have still been a novelty in Irenaeus's time. Against Heresies 3.11.7 acknowledges that many heterodox Christians use only one gospel while 3.11.9 acknowledges that some use more than four. The success of Tatian's Diatessaron in about the same time period is \"... a powerful indication that the fourfold Gospel contemporaneously sponsored by Irenaeus was not broadly, let alone universally, recognized.\" (The apologist and ascetic Tatian had previously harmonized the four gospels into a single narrative, the Diatesseron c. 
150–160)", "title": "Scripture" }, { "paragraph_id": 21, "text": "Irenaeus is also the earliest attestation that the Gospel of John was written by John the Apostle, and that the Gospel of Luke was written by Luke, the companion of Paul.", "title": "Scripture" }, { "paragraph_id": 22, "text": "Scholars contend that Irenaeus quotes from 21 of the 27 New Testament books, such as:", "title": "Scripture" }, { "paragraph_id": 23, "text": "He may refer to Hebrews 2:30 and James 4:16 and maybe even 2 Peter 5:28, but does not cite Philemon.", "title": "Scripture" }, { "paragraph_id": 24, "text": "Irenaeus cited the New Testament approximately 1,000 times. About one third of his citations are made to Paul's letters. Irenaeus considered all 13 letters belonging to the Pauline corpus to have been written by Paul himself.", "title": "Scripture" }, { "paragraph_id": 25, "text": "In his writing against the Gnostics, who claimed to possess a secret oral tradition from Jesus himself, Irenaeus maintained that the bishops in different cities are known as far back as the Apostles and that the oral tradition he lists from the Apostles is a safe guide to the interpretation of Scripture. In a passage that became a locus classicus of Catholic-Protestant polemics, he cited the Roman church as an example of the unbroken chain of authority, which text Catholic polemics would use to assert the primacy of Rome over Eastern churches by virtue of its preeminent authority. The succession of bishops and presbyters was important to establish a chain of custody for orthodoxy.", "title": "Apostolic authority" }, { "paragraph_id": 26, "text": "Irenaeus's point when refuting the Gnostics was that all of the Apostolic churches had preserved the same traditions and teachings in many independent streams. 
It was the unanimous agreement between these many independent streams of transmission that proved the orthodox faith, current in those churches, to be true.", "title": "Apostolic authority" }, { "paragraph_id": 27, "text": "The central point of Irenaeus's theology is the unity and the goodness of God, in opposition to the Gnostics' theory of God; a number of divine emanations (Aeons) along with a distinction between the Monad and the Demiurge. Irenaeus uses the Logos theology he inherited from Justin Martyr. Irenaeus was a student of Polycarp, who was said to have been tutored by John the Apostle. (John had used Logos terminology in the Gospel of John and the letter of 1 John). Irenaeus often spoke of the Son and the Spirit as the \"hands of God,\" though he also spoke of the Son as the \"Logos.\"", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 28, "text": "Irenaeus's emphasis on the unity of God is reflected in his corresponding emphasis on the unity of salvation history. Irenaeus repeatedly insists that God began the world and has been overseeing it ever since this creative act; everything that has happened is part of his plan for humanity. The essence of this plan is a process of maturation: Irenaeus believes that humanity was created immature, and God intended his creatures to take a long time to grow into or assume the divine likeness.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 29, "text": "Everything that has happened since has therefore been planned by God to help humanity overcome this initial mishap and achieve spiritual maturity. The world has been intentionally designed by God as a difficult place, where human beings are forced to make moral decisions, as only in this way can they mature as moral agents. Irenaeus likens death to the big fish that swallowed Jonah: it was only in the depths of the whale's belly that Jonah could turn to God and act according to the divine will. 
Similarly, death and suffering appear as evils, but without them we could never come to know God.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 30, "text": "According to Irenaeus, the high point in salvation history is the advent of Jesus. For Irenaeus, the Incarnation of Christ was intended by God before he determined that humanity would be created. Irenaeus develops this idea based on Rom. 5:14, saying \"Forinasmuch as He had a pre-existence as a saving Being, it was necessary that what might be saved should also be called into existence, in order that the Being who saves should not exist in vain.\" Some theologians maintain that Irenaeus believed that Incarnation would have occurred even if humanity had never sinned; but the fact that they did sin determined his role as the savior.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 31, "text": "Irenaeus sees Christ as the new Adam, who systematically undoes what Adam did: thus, where Adam was disobedient concerning God's edict concerning the fruit of the Tree of Knowledge of Good and Evil, Christ was obedient even to death on the wood of a tree. Irenaeus is the first to draw comparisons between Eve and Mary, contrasting the faithlessness of the former with the faithfulness of the latter. In addition to reversing the wrongs done by Adam, Irenaeus thinks of Christ as \"recapitulating\" or \"summing up\" human life.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 32, "text": "Irenaeus conceives of our salvation as essentially coming about through the incarnation of God as a man. He characterizes the penalty for sin as death and corruption. God, however, is immortal and incorruptible, and simply by becoming united to human nature in Christ he conveys those qualities to us: they spread, as it were, like a benign infection. 
Irenaeus emphasizes that salvation occurs through Christ's Incarnation, which bestows incorruptibility on humanity, rather than emphasizing His Redemptive death in the crucifixion, although the latter event is an integral part of the former.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 33, "text": "Part of the process of recapitulation is for Christ to go through every stage of human life, from infancy to old age, and simply by living it, sanctify it with his divinity. Although it is sometimes claimed that Irenaeus believed Christ did not die until he was older than is conventionally portrayed, the bishop of Lyon simply pointed out that because Jesus turned the permissible age for becoming a rabbi (30 years old and above), he recapitulated and sanctified the period between 30 and 50 years old, as per the Jewish custom of periodization on life, and so touches the beginning of old age when one becomes 50 years old. (see Adversus Haereses, book II, chapter 22).", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 34, "text": "In the passage of Adversus Haereses under consideration, Irenaeus is clear that after receiving baptism at the age of thirty, citing Luke 3:23, Gnostics then falsely assert that \"He [Jesus] preached only one year reckoning from His baptism,\" and also, \"On completing His thirtieth year He [Jesus] suffered, being in fact still a young man, and who had by no means attained to advanced age.\" Irenaeus argues against the Gnostics by using scripture to add several years after his baptism by referencing 3 distinctly separate visits to Jerusalem. The first is when Jesus makes wine out of water, he goes up to the Paschal feast-day, after which he withdraws and is found in Samaria. The second is when Jesus goes up to Jerusalem for Passover and cures the paralytic, after which he withdraws over the sea of Tiberias. 
The third mention is when he travels to Jerusalem, eats the Passover, and suffers on the following day.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 35, "text": "Irenaeus quotes scripture (John 8:57), to suggest that Jesus ministers while in his 40s. In this passage, Jesus's opponents want to argue that Jesus has not seen Abraham, because Jesus is too young. Jesus's opponents argue that Jesus was not yet 50 years old. Irenaeus argues that if Jesus were in his thirties, his opponents would have argued that he was not yet 40 years old, since that would make him even younger. Irenaeus's argument is that they would not weaken their own argument by adding years to Jesus's age. Irenaeus also writes: \"The Elders witness to this, who in Asia conferred with John the Lord's disciple, to the effect that John had delivered these things unto them: for he abode with them until the times of Trajan. And some of them saw not only John, but others also of the Apostles, and had this same account from them, and witness to the aforesaid relation.\"", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 36, "text": "In Demonstration (74) Irenaeus notes \"For Pontius Pilate was governor of Judæa, and he had at that time resentful enmity against Herod the king of the Jews. But then, when Christ was brought to him bound, Pilate sent Him to Herod, giving command to enquire of him, that he might know of a certainty what he should desire concerning Him; making Christ a convenient occasion of reconciliation with the king.\" Pilate was the prefect of the Roman province of Judaea from AD 26–36. He served under Emperor Tiberius Claudius Nero. Herod Antipas was tetrarch of Galilee and Perea, a client state of the Roman Empire. He ruled from 4 BC to 39 AD. 
In refuting Gnostic claims that Jesus preached for only one year after his baptism, Irenaeus used the \"recapitulation\" approach to demonstrate that by living beyond the age of thirty Christ sanctified even old age.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 37, "text": "Many aspects of Irenaeus's presentation of salvation history depend on Paul's Epistles.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 38, "text": "Irenaeus's conception of salvation relies heavily on the understanding found in Paul's letters. Irenaeus first brings up the theme of victory over sin and evil that is afforded by Jesus's death. God's intervention has saved humanity from the Fall of Adam and the wickedness of Satan. Human nature has become joined with God's in the person of Jesus, thus allowing human nature to have victory over sin. Paul writes on the same theme, that Christ has come so that a new order is formed, and being under the Law, is being under the sin of Adam.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 39, "text": "Reconciliation is also a theme of Paul's that Irenaeus stresses in his teachings on Salvation. Irenaeus believes Jesus coming in flesh and blood sanctified humanity so that it might again reflect the perfection associated with the likeness of the Divine. This perfection leads to a new life, in the lineage of God, which is forever striving for eternal life and unity with the Father. 
This is a carryover from Paul, who attributes this reconciliation to the actions of Christ: \"For since death came through a human being, the resurrection of the dead has also come through a human being; for as all die in Adam, so all will be made alive in Christ\".", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 40, "text": "A third theme in both Paul's and Irenaeus's conceptions of salvation is the sacrifice of Christ being necessary for the new life given to humanity in the triumph over evil. It is in this obedient sacrifice that Jesus is victor and reconciler, thus erasing the marks that Adam left on human nature. To argue against the Gnostics on this point, Irenaeus uses Colossians in showing that the debt which came by a tree has been paid for us in another tree. Furthermore, the first chapter of Ephesians is picked up in Irenaeus's discussion of the topic when he asserts, \"By His own blood He redeemed us, as also His apostle declares, 'In whom we have redemption through His blood, even the remission of sins.'\"", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 41, "text": "The frequencies of quotations and allusions to the Pauline Epistles in Against Heresies are:", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 42, "text": "To counter his Gnostic opponents, Irenaeus significantly develops Paul's presentation of Christ as the Last Adam.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 43, "text": "Irenaeus's presentation of Christ as the New Adam is based on Paul's Christ-Adam parallel in Romans 5:12–21. Irenaeus uses this parallel to demonstrate that Christ truly took human flesh. 
Irenaeus considered it important to emphasize this point because he understands the failure to recognize Christ's full humanity the bond linking the various strains of Gnosticism together, as seen in his statement that \"according to the opinion of no one of the heretics was the Word of God made flesh.\" Irenaeus believes that unless the Word became flesh, humans were not fully redeemed. He explains that by becoming man, Christ restored humanity to being in the image and likeness of God, which they had lost in the Fall of man. Just as Adam was the original head of humanity through whom all sinned, Christ is the new head of humanity who fulfills Adam's role in the Economy of Salvation. Irenaeus calls this process of restoring humanity recapitulation.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 44, "text": "For Irenaeus, Paul's presentation of the Old Law (the Mosaic covenant) in this passage indicates that the Old Law revealed humanity's sinfulness but could not save them. He explains that \"For as the law was spiritual, it merely made sin to stand out in relief, but did not destroy it. For sin had no dominion over the spirit, but over man.\" Since humans have a physical nature, they cannot be saved by a spiritual law. Instead, they need a human Savior. This is why it was necessary for Christ to take human flesh. 
Irenaeus summarizes how Christ's taking human flesh saves humanity with a statement that closely resembles Romans 5:19, \"For as by the disobedience of the one man who was originally moulded from virgin soil, the many were made sinners, and forfeited life; so was it necessary that, by the obedience of one man, who was originally born from a virgin, many should be justified and receive salvation.\" The physical creation of Adam and Christ is emphasized by Irenaeus to demonstrate how the Incarnation saves humanity's physical nature.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 45, "text": "Irenaeus emphasizes the importance of Christ's reversal of Adam's action. Through His obedience, Christ undoes Adam's disobedience. Irenaeus presents the Passion as the climax of Christ's obedience, emphasizing how this obedience on the tree of the Cross undoes the disobedience that occurred through a tree.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 46, "text": "Irenaeus's interpretation of Paul's discussion of Christ as the New Adam is significant because it helped develop the recapitulation theory of atonement. Irenaeus emphasizes that it is through Christ's reversal of Adam's action that humanity is saved, rather than considering the Redemption to occur in a cultic or juridical way.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 47, "text": "The biblical passage, \"Death has been swallowed up in victory\", implied for Irenaeus that the Lord will surely resurrect the first human, i.e. Adam, as one of the saved. According to Irenaeus, those who deny Adam's salvation are “shutting themselves out from life for ever” and the first one who did so was Tatian. 
The notion that the Second Adam saved the first Adam was advocated not only by Irenaeus, but also by Gregory Thaumaturgus, which suggests that it was popular in the Early Church.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 48, "text": "Valentinian Gnosticism was one of the major forms of Gnosticism that Irenaeus opposed.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 49, "text": "According to the Gnostic view of Salvation, creation was perfect to begin with; it did not need time to grow and mature. For the Valentinians, the material world is the result of the loss of perfection which resulted from Sophia's desire to understand the Forefather. Therefore, one is ultimately redeemed, through secret knowledge, to enter the pleroma of which the Achamoth originally fell.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 50, "text": "According to the Valentinian Gnostics, there are three classes of human beings. They are the material, who cannot attain salvation; the psychic, who are strengthened by works and faith (they are part of the church); and the spiritual, who cannot decay or be harmed by material actions. Essentially, ordinary humans—those who have faith but do not possess the special knowledge—will not attain salvation. Spirituals, on the other hand—those who obtain this great gift—are the only class that will eventually attain salvation.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 51, "text": "In his article entitled \"The Demiurge\", J.P. Arendzen sums up the Valentinian view of the salvation of man. 
He writes, \"The first, or carnal men, will return to the grossness of matter and finally be consumed by fire; the second, or psychic men, together with the Demiurge as their master, will enter a middle state, neither heaven (pleroma) nor hell (whyle); the purely spiritual men will be completely freed from the influence of the Demiurge and together with the Saviour and Achamoth, his spouse, will enter the pleroma divested of body (húle) and soul (psuché).\"", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 52, "text": "In this understanding of salvation, the purpose of the Incarnation was to redeem the Spirituals from their material bodies. By taking a material body, the Son becomes the Savior and facilitates this entrance into the pleroma by making it possible for the Spirituals to receive his spiritual body. However, in becoming a body and soul, the Son Himself becomes one of those needing redemption. Therefore, the Word descends onto the Savior at His Baptism in the Jordan, which liberates the Son from his corruptible body and soul. His redemption from the body and soul is then applied to the Spirituals. In response to this Gnostic view of Christ, Irenaeus emphasized that the Word became flesh and developed a soteriology that emphasized the significance of Christ's material Body in saving humanity, as discussed in the sections above.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 53, "text": "In his criticism of Gnosticism, Irenaeus made reference to a Gnostic gospel which portrayed Judas in a positive light, as having acted in accordance with Jesus's instructions. 
The recently discovered Gospel of Judas dates close to the period when Irenaeus lived (late 2nd century), and scholars typically regard this work as one of many Gnostic texts, showing one of many varieties of Gnostic beliefs of the period.", "title": "Theology and contrast with Gnosticism" }, { "paragraph_id": 54, "text": "Irenaeus took part in the Quartodeciman Controversy. When Victor I of Rome tried to force a universal practice of fasting until Easter to supersede the Jewish practice and prevent Christians from partaking of the Passover, Polycrates, who led the Churches of Asia Minor, continued to hold the old traditions of the paschal feast. For this reason Victor I wanted to excommunicate Polycrates and his supporters, but this was a step too far for Irenaeus and other bishops.", "title": "Quartodeciman Controversy" } ]
Irenaeus was a Greek bishop noted for his role in guiding and expanding Christian communities in the southern regions of present-day France and, more widely, for the development of Christian theology by combating heterodox or Gnostic interpretations of Scripture as heresy and defining proto-orthodoxy. Originating from Smyrna, he had seen and heard the preaching of Polycarp, who in turn was said to have heard John the Evangelist, and thus was the last-known living connection with the Apostles. Chosen as bishop of Lugdunum, now Lyon, his best-known work is Against Heresies, often cited as Adversus Haereses, a refutation of gnosticism, in particular that of Valentinus. To counter the doctrines of the gnostic sects claiming secret wisdom, he offered three pillars of orthodoxy: the scriptures, the tradition handed down from the apostles, and the teaching of the apostles' successors. Intrinsic to his writing is that the surest source of Christian guidance is the Church of Rome, and he is the earliest surviving witness to regard all four of the now-canonical gospels as essential. He is recognized as a saint in the Catholic Church, which celebrates his feast on 28 June, and in the Eastern Orthodox Churches, which celebrate the feast on 23 August. Irenaeus is honored in the Church of England and in the Episcopal Church on 28 June. Pope Francis declared Irenaeus the 37th Doctor of the Church on 21 January 2022.
2001-12-19T15:24:28Z
2023-12-31T22:51:17Z
[ "Template:Circa", "Template:History of Catholic theology", "Template:Lang-grc-gre", "Template:Citation", "Template:S-rel", "Template:Catholic saints", "Template:Infobox saint", "Template:Bibleref2", "Template:Refend", "Template:S-end", "Template:Webarchive", "Template:Sister project links", "Template:Short description", "Template:Efn", "Template:Primary source inline", "Template:Cite CE1913", "Template:Librivox author", "Template:YouTube", "Template:S-start", "Template:Authority control", "Template:Use dmy dates", "Template:IPAc-en", "Template:Cite book", "Template:Cite web", "Template:Reflist", "Template:Cite journal", "Template:Theodicy", "Template:Sfn", "Template:Citation needed", "Template:Notelist", "Template:Bibleverse", "Template:Refbegin", "Template:Internet Archive author", "Template:Succession box", "Template:Christian History", "Template:Other uses", "Template:Infobox Christian leader", "Template:Catholic philosophy", "Template:See also" ]
https://en.wikipedia.org/wiki/Irenaeus
15,416
Involuntary commitment
Involuntary commitment, civil commitment, or involuntary hospitalization/hospitalisation is a legal process through which an individual who is deemed by a qualified agent to have symptoms of severe mental disorder is detained in a psychiatric hospital (inpatient) where they can be treated involuntarily. This treatment may involve the administration of psychoactive drugs, including involuntary administration. In many jurisdictions, people diagnosed with mental health disorders can also be forced to undergo treatment while in the community; this is sometimes referred to as outpatient commitment and shares legal processes with commitment. Criteria for civil commitment are established by laws which vary between nations. Commitment proceedings often follow a period of emergency hospitalization, during which an individual with acute psychiatric symptoms is confined for a relatively short duration (e.g. 72 hours) in a treatment facility for evaluation and stabilization by mental health professionals who may then determine whether further civil commitment is appropriate or necessary. Civil commitment procedures may take place in a court or only involve physicians. If commitment does not involve a court there is normally an appeal process that does involve the judiciary in some capacity, though potentially through a specialist court. For most jurisdictions, involuntary commitment is applied to individuals believed to be experiencing a mental illness that impairs their ability to reason to such an extent that the agents of the law, state, or courts determine that decisions will be made for the individual under a legal framework. In some jurisdictions, this is a proceeding distinct from being found incompetent. Involuntary commitment is used in some degree for each of the following although different jurisdictions have different criteria. Some jurisdictions limit involuntary treatment to individuals who meet statutory criteria for presenting a danger to self or others. 
Other jurisdictions have broader criteria. The legal process by which commitment takes place varies between jurisdictions. Some jurisdictions have a formal court hearing where testimony and other evidence may also be submitted, and the subject of the hearing is typically entitled to legal counsel and may challenge a commitment order through habeas corpus. Other jurisdictions have delegated these powers to physicians, though they may provide an appeal process that involves the judiciary but may also involve physicians. For example, in the UK a mental health tribunal consists of a judge, a medical member, and a lay representative. Training is gradually becoming available in mental health first aid to equip community members such as teachers, school administrators, police officers, and medical workers with training in recognizing, and authority in managing, situations where involuntary evaluations of behavior are applicable under law. The extension of first aid training to cover mental health problems and crises is a quite recent development. A mental health first aid training course was developed in Australia in 2001 and has been found to improve assistance provided to persons with an alleged mental illness or mental health crisis. This form of training has now spread to a number of other countries (Canada, Finland, Hong Kong, Ireland, Singapore, Scotland, England, Wales, and the United States). Mental health triage may be used in an emergency room to make a determination about potential risk and apply treatment protocols. Observation is sometimes used to determine whether a person warrants involuntary commitment. It is not always clear on a relatively brief examination whether a person should be committed. Austria, Belgium, Germany, Israel, the Netherlands, Northern Ireland, Russia, Taiwan, Ontario (Canada), and the United States have adopted commitment criteria based on the presumed danger of the defendant to self or to others.
People with suicidal thoughts may act on these impulses and harm or kill themselves. People with psychosis are occasionally driven by their delusions or hallucinations to harm themselves or others. Research has found that those with schizophrenia are between 3.4 and 7.4 times more likely to engage in violent behaviour than members of the general public. However, because other confounding factors such as childhood adversity and poverty are correlated with both schizophrenia and violence, it can be difficult to determine whether this effect is due to schizophrenia or other factors. In an attempt to avoid these confounding factors, researchers have tried comparing the rates of violence amongst people diagnosed with schizophrenia to their siblings in a similar manner to twin studies. In these studies people with schizophrenia are found to be between 1.3 and 1.8 times more likely to engage in violent behaviour. People with certain types of personality disorders can occasionally present a danger to themselves or others. This concern has found expression in the standards for involuntary commitment in every US state and in other countries as the danger to self or others standard, sometimes supplemented by the requirement that the danger be imminent. In some jurisdictions, the danger to self or others standard has been broadened in recent years to include need-for-treatment criteria such as "gravely disabled". Starting in the 1960s, there has been a worldwide trend toward moving psychiatric patients from hospital settings to less restrictive settings in the community, a shift known as "deinstitutionalization". Because the shift was typically not accompanied by a commensurate development of community-based services, critics say that deinstitutionalization has led to large numbers of people who would once have been inpatients instead being incarcerated or becoming homeless.
In some jurisdictions, laws authorizing court-ordered outpatient treatment have been passed in an effort to compel individuals with chronic, untreated severe mental illness to take psychiatric medication while living outside the hospital (e.g. Laura's Law, Kendra's Law). In a study of 269 patients from Vermont State Hospital done by Courtenay M. Harding and associates, about two-thirds of the ex-patients did well after deinstitutionalization. In 1838, France enacted a law to regulate both the admissions into asylums and asylum services across the country. Édouard Séguin developed a systematic approach for training individuals with mental deficiencies, and, in 1839, he opened the first school for intellectually disabled people. His method of treatment was based on the idea that intellectually disabled people did not suffer from disease. In the United Kingdom, provision for the care of the mentally ill began in the early 19th century with a state-led effort. Public mental asylums were established in Britain after the passing of the 1808 County Asylums Act. This empowered magistrates to build rate-supported asylums in every county to house the many "pauper lunatics". Nine counties first applied, and the first public asylum opened in 1812 in Nottinghamshire. Parliamentary Committees were established to investigate abuses at private madhouses like Bethlem Hospital - its officers were eventually dismissed and national attention was focused on the routine use of bars, chains and handcuffs and the filthy conditions in which the inmates lived. However, it was not until 1828 that the newly appointed Commissioners in Lunacy were empowered to license and supervise private asylums. The Lunacy Act 1845 was a landmark in the treatment of the mentally ill, as it explicitly changed the status of mentally ill people to patients who required treatment. The Act created the Lunacy Commission, headed by Lord Shaftesbury, focusing on reform of the legislation concerning lunacy. 
The commission consisted of eleven Metropolitan Commissioners who were required to carry out the provisions of the Act; the compulsory construction of asylums in every county, with regular inspections on behalf of the Home Secretary. All asylums were required to have written regulations and to have a resident qualified physician. A national body for asylum superintendents - the Medico-Psychological Association - was established in 1866 under the Presidency of William A. F. Browne, although the body appeared in an earlier form in 1841. At the turn of the century, England and France combined had only a few hundred individuals in asylums. By the late 1890s and early 1900s, those so detained had risen to the hundreds of thousands. However, the idea that mental illness could be ameliorated through institutionalization was soon disappointed. Psychiatrists were pressured by an ever-increasing patient population. The average number of patients in asylums kept increasing. Asylums were quickly becoming almost indistinguishable from custodial institutions, and the reputation of psychiatry in the medical world was at an extreme low. In the United States, the erection of state asylums began with the first law for the creation of one in New York, passed in 1842. The Utica State Hospital was opened in approximately 1850. The creation of this hospital, as of many others, was largely the work of Dorothea Lynde Dix, whose philanthropic efforts extended over many states, and in Europe as far as Constantinople. Many state hospitals in the United States were built in the 1850s and 1860s on the Kirkbride Plan, an architectural style meant to have a curative effect.
In the United States and most other developed societies, severe restrictions have been placed on the circumstances under which a person may be committed or treated against their will as such actions have been ruled by the United States Supreme Court and other national legislative bodies as a violation of civil rights and/or human rights. The Supreme Court case O'Connor v. Donaldson established that the mere presence of mental illness and the necessity for treatment are not sufficient by themselves to justify involuntary commitment, if the patient is capable of surviving in freedom and does not present a danger of harm to themselves or others. Criteria for involuntary commitment are generally set by the individual states, and often have both short- and long-term types of commitment. Short-term commitment tends to be a few days or less, requiring an examination by a medical professional, while longer-term commitment typically requires a court hearing, or sentencing as part of a criminal trial. Indefinite commitment is rare and is usually reserved for individuals who are violent or present an ongoing danger to themselves and others. New York City officials under several administrations have implemented programs involving the involuntary hospitalization of people with mental illnesses in the city. Some of these policies have involved reinterpreting the standard of "harm to themselves or others" to include neglecting their own well-being or posing a harm to themselves or others in the future. In 1987–88, a homeless woman named Joyce Brown worked with the New York Civil Liberties Union to challenge her forced hospitalization under a new Mayor Ed Koch administration program. The trial, which attracted significant media attention, ended in her favor, and while the city won on appeal she was ultimately released after a subsequent case determined she could not be forcibly medicated. 
In 2022, Mayor Eric Adams announced a similar compulsory hospitalization program, relying on similar legal interpretations. Historically, until the mid-1960s in most jurisdictions in the United States, all committals to public psychiatric facilities and most committals to private ones were involuntary. Since then, there have been alternating trends towards the abolition or substantial reduction of involuntary commitment, a trend known as deinstitutionalisation. In many jurisdictions, individuals can voluntarily admit themselves to a mental health hospital and may have more rights than those who are involuntarily committed. This practice is referred to as voluntary commitment. In the United States, Kansas v. Hendricks established the procedures for a long-term or indefinite form of commitment applicable to people convicted of some sexual offences. United Nations General Assembly Resolution 46/119, "Principles for the Protection of Persons with Mental Illness and the Improvement of Mental Health Care", is a non-binding resolution advocating certain broadly drawn procedures for the carrying out of involuntary commitment. These principles have been used in many countries where local laws have been revised or new ones implemented. The UN runs programs in some countries to assist in this process. The potential dangers of institutions have been noted and criticized by reformers/activists almost since their foundation. Charles Dickens was an outspoken and high-profile early critic, and several of his novels, in particular Oliver Twist and Hard Times, demonstrate his insight into the damage that institutions can do to human beings.
Enoch Powell, when Minister for Health in the early 1960s, was a later opponent who was appalled by what he witnessed on his visits to the asylums, and his famous "water tower" speech in 1961 called for the closure of all NHS asylums and their replacement by wards in general hospitals: "There they stand, isolated, majestic, imperious, brooded over by the gigantic water-tower and chimney combined, rising unmistakable and daunting out of the countryside - the asylums which our forefathers built with such immense solidity to express the notions of their day. Do not for a moment underestimate their powers of resistance to our assault. Let me describe some of the defenses which we have to storm." Scandal after scandal followed, with many high-profile public inquiries. These involved the exposure of abuses such as unscientific surgical techniques such as lobotomy and the widespread neglect and abuse of vulnerable patients in the US and Europe. The growing anti-psychiatry movement in the 1960s and 1970s led in Italy to the first successful legislative challenge to the authority of the mental institutions, culminating in their closure. During the 1970s and 1990s the hospital population started to fall rapidly, mainly because of the deaths of long-term inmates. Significant efforts were made to re-house large numbers of former residents in a variety of suitable or otherwise alternative accommodation. The first 1,000+ bed hospital to close was Darenth Park Hospital in Kent, swiftly followed by many more across the UK. The haste of these closures, driven by the Conservative governments led by Margaret Thatcher and John Major, led to considerable criticism in the press, as some individuals slipped through the net into homelessness or were discharged to poor quality private sector mini-institutions. 
There are instances in which mental health professionals have wrongfully deemed individuals to have been displaying the symptoms of a mental disorder, and committed the individual for treatment in a psychiatric hospital upon such grounds. Involuntary commitment is undertaken when the information provided to the doctor during the period of evaluation appears to show that the patient is a danger to themselves or others. This can be challenging in some scenarios, when there isn't enough collateral information to disprove claims of erratic or dangerous behaviors. Therefore, claims of wrongful commitment are a common theme in the anti-psychiatry movement. In 1860, the case of Elizabeth Packard, who was wrongfully committed that year and later sued successfully, highlighted the issue of wrongful involuntary commitment. In 1887, investigative journalist Nellie Bly went undercover at an asylum in New York City to expose the terrible conditions that mental patients at the time had to deal with. She published her findings and experiences as articles in New York World, and later made the articles into one book called Ten Days in a Mad-House. In the first half of the twentieth century there were a few high-profile cases of wrongful commitment based on racism or punishment for political dissenters. In the former Soviet Union, psychiatric hospitals were used as prisons to isolate political prisoners from the rest of society. British playwright Tom Stoppard wrote Every Good Boy Deserves Favour about the relationship between a patient and his doctor in one of these hospitals. Stoppard was inspired by a meeting with a Russian exile. In 1927, after the execution of Sacco and Vanzetti in the United States, demonstrator Aurora D'Angelo was sent to a mental health facility for psychiatric evaluation after she participated in a rally in support of the anarchists.
Throughout the 1940s and 1950s in Canada, 20,000 Canadian children, called the Duplessis orphans, were wrongfully certified as being mentally ill and as a result were committed to psychiatric institutions where they were allegedly forced to take psychiatric medication that they did not need and were abused. They were named after Maurice Duplessis, the premier of Quebec at the time, who deliberately committed these children to misappropriate additional subsidies from the federal government. Decades later in the 1990s, several of the orphans sued Quebec and the Catholic Church for the abuse and wrongdoing. In 1958, black pastor and activist Clennon Washington King Jr. tried enrolling for summer classes at the University of Mississippi, which at the time was an all-white institution; the local police secretly arrested and involuntarily committed him to a mental hospital for 12 days. Patients are able to sue if they believe that they have been wrongfully committed. In one instance, Junius Wilson, an African American man, was committed to Cherry Hospital in Goldsboro, North Carolina in 1925 for an alleged crime without a trial or conviction. He was castrated. He continued to be held at Cherry Hospital for the next 67 years of his life. It turned out he was deaf rather than mentally ill. In many U.S. states, sex offenders who have completed a period of incarceration can be civilly committed to a mental institution based on a finding of dangerousness due to a mental disorder. Although the United States Supreme Court determined that this practice does not constitute double jeopardy, organizations such as the American Psychiatric Association (APA) strongly oppose the practice.
The Task Force on Sexually Dangerous Offenders, a component of APA's Council on Psychiatry and Law, reported that "in the opinion of the task force, sexual predator commitment laws represent a serious assault on the integrity of psychiatry, particularly with regard to defining mental illness and the clinical conditions for compulsory treatment. Moreover, by bending civil commitment to serve essentially non-medical purposes, statutes threaten to undermine the legitimacy of the medical model of commitment."
[ { "paragraph_id": 0, "text": "Involuntary commitment, civil commitment, or involuntary hospitalization/hospitalisation is a legal process through which an individual who is deemed by a qualified agent to have symptoms of severe mental disorder is detained in a psychiatric hospital (inpatient) where they can be treated involuntarily. This treatment may involve the administration of psychoactive drugs, including involuntary administration. In many jurisdictions, people diagnosed with mental health disorders can also be forced to undergo treatment while in the community; this is sometimes referred to as outpatient commitment and shares legal processes with commitment.", "title": "" }, { "paragraph_id": 1, "text": "Criteria for civil commitment are established by laws which vary between nations. Commitment proceedings often follow a period of emergency hospitalization, during which an individual with acute psychiatric symptoms is confined for a relatively short duration (e.g. 72 hours) in a treatment facility for evaluation and stabilization by mental health professionals who may then determine whether further civil commitment is appropriate or necessary. Civil commitment procedures may take place in a court or only involve physicians. If commitment does not involve a court there is normally an appeal process that does involve the judiciary in some capacity, though potentially through a specialist court.", "title": "" }, { "paragraph_id": 2, "text": "For most jurisdictions, involuntary commitment is applied to individuals believed to be experiencing a mental illness that impairs their ability to reason to such an extent that the agents of the law, state, or courts determine that decisions will be made for the individual under a legal framework. In some jurisdictions, this is a proceeding distinct from being found incompetent. Involuntary commitment is used in some degree for each of the following although different jurisdictions have different criteria. 
Some jurisdictions limit involuntary treatment to individuals who meet statutory criteria for presenting a danger to self or others. Other jurisdictions have broader criteria. The legal process by which commitment takes place varies between jurisdictions. Some jurisdictions have a formal court hearing where testimony and other evidence may also be submitted, and the subject of the hearing is typically entitled to legal counsel and may challenge a commitment order through habeas corpus. Other jurisdictions have delegated these powers to physicians, though they may provide an appeal process that involves the judiciary but may also involve physicians. For example, in the UK a mental health tribunal consists of a judge, a medical member, and a lay representative.", "title": "Purpose" }, { "paragraph_id": 3, "text": "Training is gradually becoming available in mental health first aid to equip community members such as teachers, school administrators, police officers, and medical workers with training in recognizing, and authority in managing, situations where involuntary evaluations of behavior are applicable under law. The extension of first aid training to cover mental health problems and crises is a quite recent development. A mental health first aid training course was developed in Australia in 2001 and has been found to improve assistance provided to persons with an alleged mental illness or mental health crisis. This form of training has now spread to a number of other countries (Canada, Finland, Hong Kong, Ireland, Singapore, Scotland, England, Wales, and the United States). Mental health triage may be used in an emergency room to make a determination about potential risk and apply treatment protocols.", "title": "Purpose" }, { "paragraph_id": 4, "text": "Observation is sometimes used to determine whether a person warrants involuntary commitment.
It is not always clear on a relatively brief examination whether a person should be committed.", "title": "Purpose" }, { "paragraph_id": 5, "text": "Austria, Belgium, Germany, Israel, the Netherlands, Northern Ireland, Russia, Taiwan, Ontario (Canada), and the United States have adopted commitment criteria based on the presumed danger of the defendant to self or to others.", "title": "Purpose" }, { "paragraph_id": 6, "text": "People with suicidal thoughts may act on these impulses and harm or kill themselves.", "title": "Purpose" }, { "paragraph_id": 7, "text": "People with psychosis are occasionally driven by their delusions or hallucinations to harm themselves or others. Research has found that those with schizophrenia are between 3.4 and 7.4 times more likely to engage in violent behaviour than members of the general public. However, because other confounding factors such as childhood adversity and poverty are correlated with both schizophrenia and violence, it can be difficult to determine whether this effect is due to schizophrenia or other factors. In an attempt to avoid these confounding factors, researchers have tried comparing the rates of violence amongst people diagnosed with schizophrenia to their siblings in a similar manner to twin studies. In these studies people with schizophrenia are found to be between 1.3 and 1.8 times more likely to engage in violent behaviour.", "title": "Purpose" }, { "paragraph_id": 8, "text": "People with certain types of personality disorders can occasionally present a danger to themselves or others.", "title": "Purpose" }, { "paragraph_id": 9, "text": "This concern has found expression in the standards for involuntary commitment in every US state and in other countries as the danger to self or others standard, sometimes supplemented by the requirement that the danger be imminent.
In some jurisdictions, the danger to self or others standard has been broadened in recent years to include need-for-treatment criteria such as \"gravely disabled\".", "title": "Purpose" }, { "paragraph_id": 10, "text": "Starting in the 1960s, there has been a worldwide trend toward moving psychiatric patients from hospital settings to less restrictive settings in the community, a shift known as \"deinstitutionalization\". Because the shift was typically not accompanied by a commensurate development of community-based services, critics say that deinstitutionalization has led to large numbers of people who would once have been inpatients instead being incarcerated or becoming homeless. In some jurisdictions, laws authorizing court-ordered outpatient treatment have been passed in an effort to compel individuals with chronic, untreated severe mental illness to take psychiatric medication while living outside the hospital (e.g. Laura's Law, Kendra's Law).", "title": "Deinstitutionalization" }, { "paragraph_id": 11, "text": "In a study of 269 patients from Vermont State Hospital done by Courtenay M. Harding and associates, about two-thirds of the ex-patients did well after deinstitutionalization.", "title": "Deinstitutionalization" }, { "paragraph_id": 12, "text": "In 1838, France enacted a law to regulate both the admissions into asylums and asylum services across the country. Édouard Séguin developed a systematic approach for training individuals with mental deficiencies, and, in 1839, he opened the first school for intellectually disabled people. His method of treatment was based on the idea that intellectually disabled people did not suffer from disease.", "title": "Around the world" }, { "paragraph_id": 13, "text": "In the United Kingdom, provision for the care of the mentally ill began in the early 19th century with a state-led effort. Public mental asylums were established in Britain after the passing of the 1808 County Asylums Act.
This empowered magistrates to build rate-supported asylums in every county to house the many \"pauper lunatics\". Nine counties first applied, and the first public asylum opened in 1812 in Nottinghamshire. Parliamentary Committees were established to investigate abuses at private madhouses like Bethlem Hospital - its officers were eventually dismissed and national attention was focused on the routine use of bars, chains and handcuffs and the filthy conditions in which the inmates lived. However, it was not until 1828 that the newly appointed Commissioners in Lunacy were empowered to license and supervise private asylums.", "title": "Around the world" }, { "paragraph_id": 14, "text": "The Lunacy Act 1845 was a landmark in the treatment of the mentally ill, as it explicitly changed the status of mentally ill people to patients who required treatment. The Act created the Lunacy Commission, headed by Lord Shaftesbury, focusing on reform of the legislation concerning lunacy. The commission consisted of eleven Metropolitan Commissioners who were required to carry out the provisions of the Act; the compulsory construction of asylums in every county, with regular inspections on behalf of the Home Secretary. All asylums were required to have written regulations and to have a resident qualified physician. A national body for asylum superintendents - the Medico-Psychological Association - was established in 1866 under the Presidency of William A. F. Browne, although the body appeared in an earlier form in 1841.", "title": "Around the world" }, { "paragraph_id": 15, "text": "At the turn of the century, England and France combined had only a few hundred individuals in asylums. By the late 1890s and early 1900s, those so detained had risen to the hundreds of thousands. However, the idea that mental illness could be ameliorated through institutionalization was soon disappointed. Psychiatrists were pressured by an ever-increasing patient population. 
The average number of patients in asylums kept increasing. Asylums were quickly becoming almost indistinguishable from custodial institutions, and the reputation of psychiatry in the medical world was at an extreme low.", "title": "Around the world" }, { "paragraph_id": 16, "text": "In the United States, the erection of state asylums began with the first law for the creation of one in New York, passed in 1842. The Utica State Hospital was opened around 1850. The creation of this hospital, as of many others, was largely the work of Dorothea Lynde Dix, whose philanthropic efforts extended over many states, and in Europe as far as Constantinople. Many state hospitals in the United States were built in the 1850s and 1860s on the Kirkbride Plan, an architectural style meant to have curative effect.", "title": "Around the world" }, { "paragraph_id": 17, "text": "In the United States and most other developed societies, severe restrictions have been placed on the circumstances under which a person may be committed or treated against their will as such actions have been ruled by the United States Supreme Court and other national legislative bodies as a violation of civil rights and/or human rights. The Supreme Court case O'Connor v. Donaldson established that the mere presence of mental illness and the necessity for treatment are not sufficient by themselves to justify involuntary commitment, if the patient is capable of surviving in freedom and does not present a danger of harm to themselves or others. Criteria for involuntary commitment are generally set by the individual states, and often have both short- and long-term types of commitment. Short-term commitment tends to be a few days or less, requiring an examination by a medical professional, while longer-term commitment typically requires a court hearing, or sentencing as part of a criminal trial.
Indefinite commitment is rare and is usually reserved for individuals who are violent or present an ongoing danger to themselves and others.", "title": "Around the world" }, { "paragraph_id": 18, "text": "New York City officials under several administrations have implemented programs involving the involuntary hospitalization of people with mental illnesses in the city. Some of these policies have involved reinterpreting the standard of \"harm to themselves or others\" to include neglecting their own well-being or posing a harm to themselves or others in the future. In 1987–88, a homeless woman named Joyce Brown worked with the New York Civil Liberties Union to challenge her forced hospitalization under a new program of Mayor Ed Koch's administration. The trial, which attracted significant media attention, ended in her favor, and while the city won on appeal she was ultimately released after a subsequent case determined she could not be forcibly medicated. In 2022, Mayor Eric Adams announced a similar compulsory hospitalization program, relying on similar legal interpretations.", "title": "Around the world" }, { "paragraph_id": 19, "text": "Historically, until the mid-1960s in most jurisdictions in the United States, all committals to public psychiatric facilities and most committals to private ones were involuntary. Since then, there have been alternating trends towards the abolition or substantial reduction of involuntary commitment, a trend known as deinstitutionalization. In many countries, individuals can voluntarily admit themselves to a mental health hospital and may have more rights than those who are involuntarily committed. This practice is referred to as voluntary commitment.", "title": "Around the world" }, { "paragraph_id": 20, "text": "In the United States, Kansas v.
Hendricks established the procedures for a long-term or indefinite form of commitment applicable to people convicted of some sexual offences.", "title": "Around the world" }, { "paragraph_id": 21, "text": "United Nations General Assembly Resolution 46/119, \"Principles for the Protection of Persons with Mental Illness and the Improvement of Mental Health Care\", is a non-binding resolution advocating certain broadly drawn procedures for the carrying out of involuntary commitment. These principles have been used in many countries where local laws have been revised or new ones implemented. The UN runs programs in some countries to assist in this process.", "title": "Around the world" }, { "paragraph_id": 22, "text": "The potential dangers of institutions have been noted and criticized by reformers/activists almost since their foundation. Charles Dickens was an outspoken and high-profile early critic, and several of his novels, in particular Oliver Twist and Hard Times, demonstrate his insight into the damage that institutions can do to human beings.", "title": "Criticism" }, { "paragraph_id": 23, "text": "Enoch Powell, when Minister for Health in the early 1960s, was a later opponent who was appalled by what he witnessed on his visits to the asylums, and his famous \"water tower\" speech in 1961 called for the closure of all NHS asylums and their replacement by wards in general hospitals:", "title": "Criticism" }, { "paragraph_id": 24, "text": "\"There they stand, isolated, majestic, imperious, brooded over by the gigantic water-tower and chimney combined, rising unmistakable and daunting out of the countryside - the asylums which our forefathers built with such immense solidity to express the notions of their day. Do not for a moment underestimate their powers of resistance to our assault.
Let me describe some of the defenses which we have to storm.\"", "title": "Criticism" }, { "paragraph_id": 25, "text": "Scandal after scandal followed, with many high-profile public inquiries. These involved the exposure of abuses such as unscientific surgical techniques like lobotomy and the widespread neglect and abuse of vulnerable patients in the US and Europe. The growing anti-psychiatry movement in the 1960s and 1970s led in Italy to the first successful legislative challenge to the authority of the mental institutions, culminating in their closure.", "title": "Criticism" }, { "paragraph_id": 26, "text": "From the 1970s to the 1990s, the hospital population started to fall rapidly, mainly because of the deaths of long-term inmates. Significant efforts were made to re-house large numbers of former residents in a variety of suitable or otherwise alternative accommodation. The first 1,000+ bed hospital to close was Darenth Park Hospital in Kent, swiftly followed by many more across the UK. The haste of these closures, driven by the Conservative governments led by Margaret Thatcher and John Major, led to considerable criticism in the press, as some individuals slipped through the net into homelessness or were discharged to poor quality private sector mini-institutions.", "title": "Criticism" }, { "paragraph_id": 27, "text": "There are instances in which mental health professionals have wrongfully deemed individuals to have been displaying the symptoms of a mental disorder, and committed the individual for treatment in a psychiatric hospital upon such grounds. Involuntary commitment is undertaken when the information provided to the doctor during the period of evaluation appears to show that the patient is a danger to themselves or others. This can be challenging in some scenarios, when there isn't enough collateral information to disprove claims of erratic or dangerous behaviors.
Therefore, claims of wrongful commitment are a common theme in the anti-psychiatry movement.", "title": "Criticism" }, { "paragraph_id": 28, "text": "The case of Elizabeth Packard, who was wrongfully committed in 1860 and subsequently sued and won her case, highlighted the issue of wrongful involuntary commitment. In 1887, investigative journalist Nellie Bly went undercover at an asylum in New York City to expose the terrible conditions that mental patients at the time had to deal with. She published her findings and experiences as articles in New York World, and later collected the articles into the book Ten Days in a Mad-House.", "title": "Criticism" }, { "paragraph_id": 29, "text": "In the first half of the twentieth century there were a few high-profile cases of wrongful commitment based on racism or punishment for political dissenters. In the former Soviet Union, psychiatric hospitals were used as prisons to isolate political prisoners from the rest of society. British playwright Tom Stoppard wrote Every Good Boy Deserves Favour about the relationship between a patient and his doctor in one of these hospitals. Stoppard was inspired by a meeting with a Russian exile. In 1927, after the execution of Sacco and Vanzetti in the United States, demonstrator Aurora D'Angelo was sent to a mental health facility for psychiatric evaluation after she participated in a rally in support of the anarchists. Throughout the 1940s and 1950s in Canada, 20,000 Canadian children, called the Duplessis orphans, were wrongfully certified as being mentally ill and as a result were committed to psychiatric institutions where they were allegedly forced to take psychiatric medication that they did not need and were abused. They were named after Maurice Duplessis, the premier of Quebec at the time, who deliberately committed these children to misappropriate additional subsidies from the federal government.
Decades later in the 1990s, several of the orphans sued Quebec and the Catholic Church for the abuse and wrongdoing. In 1958, black pastor and activist Clennon Washington King Jr. tried enrolling at the University of Mississippi, which at the time admitted only white students, for summer classes; the local police secretly arrested and involuntarily committed him to a mental hospital for 12 days.", "title": "Criticism" }, { "paragraph_id": 30, "text": "Patients are able to sue if they believe that they have been wrongfully committed. In one instance, Junius Wilson, an African American man, was committed to Cherry Hospital in Goldsboro, North Carolina in 1925 for an alleged crime without a trial or conviction. He was castrated. He continued to be held at Cherry Hospital for the next 67 years of his life. It turned out he was deaf rather than mentally ill.", "title": "Criticism" }, { "paragraph_id": 31, "text": "In many U.S. states, sex offenders who have completed a period of incarceration can be civilly committed to a mental institution based on a finding of dangerousness due to a mental disorder. Although the United States Supreme Court determined that this practice does not constitute double jeopardy, organizations such as the American Psychiatric Association (APA) strongly oppose the practice. The Task Force on Sexually Dangerous Offenders, a component of APA's Council on Psychiatry and Law, reported that \"in the opinion of the task force, sexual predator commitment laws represent a serious assault on the integrity of psychiatry, particularly with regard to defining mental illness and the clinical conditions for compulsory treatment. Moreover, by bending civil commitment to serve essentially non-medical purposes, statutes threaten to undermine the legitimacy of the medical model of commitment.\"", "title": "Criticism" } ]
Involuntary commitment, civil commitment, or involuntary hospitalization/hospitalisation is a legal process through which an individual who is deemed by a qualified agent to have symptoms of severe mental disorder is detained in a psychiatric hospital (inpatient) where they can be treated involuntarily. This treatment may involve the administration of psychoactive drugs, including involuntary administration. In many jurisdictions, people diagnosed with mental health disorders can also be forced to undergo treatment while in the community; this is sometimes referred to as outpatient commitment and shares legal processes with commitment. Criteria for civil commitment are established by laws which vary between nations. Commitment proceedings often follow a period of emergency hospitalization, during which an individual with acute psychiatric symptoms is confined for a relatively short duration in a treatment facility for evaluation and stabilization by mental health professionals who may then determine whether further civil commitment is appropriate or necessary. Civil commitment procedures may take place in a court or only involve physicians. If commitment does not involve a court there is normally an appeal process that does involve the judiciary in some capacity, though potentially through a specialist court.
2001-12-19T18:52:34Z
2023-11-26T02:36:48Z
[ "Template:Full citation needed", "Template:Criticism section", "Template:ISBN", "Template:Authority control", "Template:Wikiquote", "Template:Efn", "Template:Main", "Template:More citations needed section", "Template:Pages needed", "Template:Bluebook website", "Template:Cbignore", "Template:Refend", "Template:Short description", "Template:Redirect2", "Template:About", "Template:Unreferenced section", "Template:Citation needed", "Template:Columns-list", "Template:Cite web", "Template:Mental health law", "Template:Anti-psychiatry", "Template:UN doc", "Template:Which", "Template:Cite book", "Template:Cite court", "Template:Refbegin", "Template:See also", "Template:Notelist", "Template:Cite encyclopedia", "Template:Blockquote", "Template:Reflist", "Template:Cite journal", "Template:Cite news" ]
https://en.wikipedia.org/wiki/Involuntary_commitment
15,417
Intermolecular force
An intermolecular force (IMF) (or secondary force) is the force that mediates interaction between molecules, including the electromagnetic forces of attraction or repulsion which act between atoms and other types of neighbouring particles, e.g. atoms or ions. Intermolecular forces are weak relative to intramolecular forces – the forces which hold a molecule together. For example, the covalent bond, involving sharing electron pairs between atoms, is much stronger than the forces present between neighboring molecules. Both sets of forces are essential parts of force fields frequently used in molecular mechanics. The first reference to the nature of microscopic forces is found in Alexis Clairaut's work Théorie de la figure de la Terre, published in Paris in 1743. Other scientists who have contributed to the investigation of microscopic forces include: Laplace, Gauss, Maxwell and Boltzmann. Attractive intermolecular forces are categorized into the following types: Information on intermolecular forces is obtained by macroscopic measurements of properties like viscosity and pressure–volume–temperature (PVT) data. The link to microscopic aspects is given by virial coefficients and Lennard-Jones potentials. A hydrogen bond is an extreme form of dipole-dipole bonding, referring to the attraction between a hydrogen atom that is bonded to an element with high electronegativity, usually nitrogen, oxygen, or fluorine. The hydrogen bond is often described as a strong electrostatic dipole–dipole interaction. However, it also has some features of covalent bonding: it is directional, stronger than a van der Waals force interaction, produces interatomic distances shorter than the sum of their van der Waals radii, and usually involves a limited number of interaction partners, which can be interpreted as a kind of valence. The number of hydrogen bonds formed between molecules is equal to the number of active pairs.
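This counting rule (the number of active pairs is the smaller of the donor's available hydrogens and the acceptor's lone pairs) can be sketched as a one-line function. The per-molecule counts used in the examples (two hydrogens and two lone pairs for water, three hydrogens for ammonia) are standard chemistry values, not figures taken from this article:

```python
# Sketch of the hydrogen-bond "active pair" counting rule: the number of
# hydrogen bonds a donor/acceptor pair can form is limited by the smaller
# of the donor's hydrogens and the acceptor's lone pairs.

def active_pairs(donor_hydrogens: int, acceptor_lone_pairs: int) -> int:
    """Number of hydrogen bonds one donor/acceptor molecule pair can form."""
    return min(donor_hydrogens, acceptor_lone_pairs)

# Water (H2O): 2 donatable hydrogens, 2 lone pairs on the oxygen.
print(active_pairs(2, 2))  # 2
# Ammonia (NH3) donating to water: 3 hydrogens vs. water's 2 lone pairs.
print(active_pairs(3, 2))  # 2
```

Counting both directions at once recovers the familiar result that each water molecule can participate in up to four hydrogen bonds: two through its own hydrogens and two through the oxygen's lone pairs.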
The molecule which donates its hydrogen is termed the donor molecule, while the molecule containing the lone pair participating in H bonding is termed the acceptor molecule. The number of active pairs is equal to the smaller of the number of hydrogens the donor has and the number of lone pairs the acceptor has. Though not all are depicted in the diagram, water molecules have four active bonds. The oxygen atom’s two lone pairs interact with a hydrogen each, forming two additional hydrogen bonds, and the second hydrogen atom also interacts with a neighbouring oxygen. Intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group 16 hydrides, which have little capability to hydrogen bond. Intramolecular hydrogen bonding is partly responsible for the secondary, tertiary, and quaternary structures of proteins and nucleic acids. It also plays an important role in the structure of polymers, both synthetic and natural. The attraction between cationic and anionic sites is a noncovalent, or intermolecular interaction which is usually referred to as ion pairing or salt bridge. It is essentially due to electrostatic forces, although in aqueous medium the association is driven by entropy and often even endothermic. Most salts form crystals with characteristic distances between the ions; in contrast to many other noncovalent interactions, salt bridges are not directional and in the solid state usually show contact determined only by the van der Waals radii of the ions. Inorganic as well as organic ions display, in water at moderate ionic strength I, similar salt-bridge association ΔG values of around 5 to 6 kJ/mol for a 1:1 combination of anion and cation, almost independent of the nature (size, polarizability, etc.) of the ions. The ΔG values are additive and approximately a linear function of the charges; the interaction of, e.g.,
a doubly charged phosphate anion with a singly charged ammonium cation accounts for about 2 × 5 = 10 kJ/mol. The ΔG values depend on the ionic strength I of the solution, as described by the Debye-Hückel equation; at zero ionic strength one observes ΔG = 8 kJ/mol. Dipole–dipole interactions (or Keesom interactions) are electrostatic interactions between molecules which have permanent dipoles. This interaction is stronger than the London forces but is weaker than ion-ion interaction because only partial charges are involved. These interactions tend to align the molecules to increase attraction (reducing potential energy). An example of a dipole–dipole interaction can be seen in hydrogen chloride (HCl): the positive end of a polar molecule will attract the negative end of the other molecule and influence its position. Polar molecules have a net attraction between them. Examples of polar molecules include hydrogen chloride (HCl) and chloroform (CHCl3). Often molecules contain dipolar groups of atoms, but have no overall dipole moment on the molecule as a whole. This occurs if there is symmetry within the molecule that causes the dipoles to cancel each other out. This occurs in molecules such as tetrachloromethane and carbon dioxide. The dipole–dipole interaction between two individual atoms is usually zero, since atoms rarely carry a permanent dipole. The Keesom interaction is a van der Waals force. It is discussed further in the section "Van der Waals forces". Ion–dipole and ion–induced dipole forces are similar to dipole–dipole and dipole–induced dipole interactions but involve ions, instead of only polar and non-polar molecules. Ion–dipole and ion–induced dipole forces are stronger than dipole–dipole interactions because the charge of any ion is much greater than the charge of a dipole moment. Ion–dipole bonding is stronger than hydrogen bonding. An ion–dipole force consists of an ion and a polar molecule interacting.
They align so that the positive and negative groups are next to one another, allowing maximum attraction. An important example of this interaction is hydration of ions in water, which gives rise to hydration enthalpy. Polar water molecules surround ions in water, and the energy released during the process is known as the hydration enthalpy. This interaction is immensely important in explaining the stability of various ions (like Cu²⁺) in water. An ion–induced dipole force consists of an ion and a non-polar molecule interacting. Like a dipole–induced dipole force, the charge of the ion causes distortion of the electron cloud on the non-polar molecule. The van der Waals forces arise from interaction between uncharged atoms or molecules, leading not only to such phenomena as the cohesion of condensed phases and physical absorption of gases, but also to a universal force of attraction between macroscopic bodies. The first contribution to van der Waals forces is due to electrostatic interactions between rotating permanent dipoles, quadrupoles (all molecules with symmetry lower than cubic), and multipoles. It is termed the Keesom interaction, named after Willem Hendrik Keesom. These forces originate from the attraction between permanent dipoles (dipolar molecules) and are temperature dependent. They consist of attractive interactions between dipoles that are ensemble averaged over different rotational orientations of the dipoles. It is assumed that the molecules are constantly rotating and never get locked into place. This is a good assumption, but at some point molecules do get locked into place. The energy of a Keesom interaction depends on the inverse sixth power of the distance, unlike the interaction energy of two spatially fixed dipoles, which depends on the inverse third power of the distance. The Keesom interaction can only occur among molecules that possess permanent dipole moments, i.e., two polar molecules.
Also, Keesom interactions are very weak van der Waals interactions and do not occur in aqueous solutions that contain electrolytes. The angle-averaged interaction is given by the following equation: V = −d1²d2² / (24π² ε0² εr² kB T r⁶), where d = electric dipole moment, ε0 = permittivity of free space, εr = dielectric constant of the surrounding material, T = temperature, kB = Boltzmann constant, and r = distance between molecules. The second contribution is the induction (also termed polarization) or Debye force, arising from interactions between rotating permanent dipoles and from the polarizability of atoms and molecules (induced dipoles). These induced dipoles occur when one molecule with a permanent dipole repels another molecule's electrons. A molecule with a permanent dipole can induce a dipole in a similar neighboring molecule and cause mutual attraction. Debye forces cannot occur between atoms. The forces between induced and permanent dipoles are not as temperature dependent as Keesom interactions because the induced dipole is free to shift and rotate around the polar molecule. The Debye induction effects and Keesom orientation effects are termed polar interactions. The induced dipole forces appear from the induction (also termed polarization), which is the attractive interaction between a permanent multipole on one molecule and a multipole it induces on another. This interaction is called the Debye force, named after Peter J. W. Debye. One example of an induction interaction between permanent dipole and induced dipole is the interaction between HCl and Ar. In this system, Ar experiences a dipole as its electrons are attracted (to the H side of HCl) or repelled (from the Cl side) by HCl. The angle-averaged interaction is given by the following equation: V = −d1²α2 / (16π² ε0² εr² r⁶), where α2 = polarizability.
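The two angle-averaged expressions above can be sketched numerically in SI units. The example inputs (a water-like dipole of 1.85 D = 6.17e-30 C·m, an argon-like polarizability, a 0.4 nm separation, 298 K, vacuum εr = 1) are illustrative textbook-style values, not figures taken from this article:

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
KB = 1.380649e-23        # Boltzmann constant, J/K

def keesom_energy(d1, d2, r, T, eps_r=1.0):
    """Angle-averaged Keesom (dipole-dipole) energy in joules:
    V = -d1^2 d2^2 / (24 pi^2 eps0^2 eps_r^2 kB T r^6)."""
    return -(d1**2 * d2**2) / (24 * math.pi**2 * EPS0**2 * eps_r**2 * KB * T * r**6)

def debye_energy(d1, alpha2, r, eps_r=1.0):
    """Angle-averaged Debye (permanent dipole-induced dipole) energy in joules:
    V = -d1^2 alpha2 / (16 pi^2 eps0^2 eps_r^2 r^6)."""
    return -(d1**2 * alpha2) / (16 * math.pi**2 * EPS0**2 * eps_r**2 * r**6)

# Two water-like dipoles (1.85 D each), 0.4 nm apart, at 298 K:
V_k = keesom_energy(6.17e-30, 6.17e-30, 4.0e-10, 298.0)
print(V_k)  # about -4.6e-21 J, i.e. roughly -2.8 kJ/mol per pair

# The same dipole inducing a dipole in an argon-like atom (alpha ~ 1.8e-40 C m^2/V):
V_d = debye_energy(6.17e-30, 1.8e-40, 4.0e-10)
print(V_d)  # noticeably weaker than the Keesom term above
```

Both energies fall off as r⁻⁶, but only the Keesom term carries the 1/T factor: doubling the temperature halves the orientation-averaged attraction, consistent with the text's remark that Keesom forces are temperature dependent.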
This kind of interaction can be expected between any polar molecule and non-polar/symmetrical molecule. The induction-interaction force is far weaker than dipole–dipole interaction, but stronger than the London dispersion force. The third and dominant contribution is the dispersion or London force (fluctuating dipole–induced dipole), which arises due to the non-zero instantaneous dipole moments of all atoms and molecules. Such polarization can be induced either by a polar molecule or by the repulsion of negatively charged electron clouds in non-polar molecules. Thus, London interactions are caused by random fluctuations of electron density in an electron cloud. An atom with a large number of electrons will have a greater associated London force than an atom with fewer electrons. The dispersion (London) force is the most important component because all materials are polarizable, whereas Keesom and Debye forces require permanent dipoles. The London interaction is universal and is present in atom-atom interactions as well. For various reasons, London interactions (dispersion) have been considered relevant for interactions between macroscopic bodies in condensed systems. Hamaker developed the theory of van der Waals forces between macroscopic bodies in 1937 and showed that the additivity of these interactions renders them considerably more long-range. This comparison is approximate. The actual relative strengths will vary depending on the molecules involved. For instance, the presence of water creates competing interactions that greatly weaken the strength of both ionic and hydrogen bonds. We may consider that for static systems, ionic bonding and covalent bonding will always be stronger than intermolecular forces in any given substance. But this is not so for large, dynamic systems such as enzyme molecules interacting with substrate molecules.
Here the numerous intermolecular bonds (most often hydrogen bonds) form an active intermediate state in which some of the covalent bonds are broken while others are formed, in this way driving the thousands of enzymatic reactions that are so important for living organisms. Intermolecular forces are repulsive at short distances and attractive at long distances (see the Lennard-Jones potential). In a gas, the repulsive force chiefly has the effect of keeping two molecules from occupying the same volume. This gives a real gas a tendency to occupy a larger volume than an ideal gas at the same temperature and pressure. The attractive force draws molecules closer together and gives a real gas a tendency to occupy a smaller volume than an ideal gas. Which interaction is more important depends on temperature and pressure (see compressibility factor). In a gas, the distances between molecules are generally large, so intermolecular forces have only a small effect. The attractive force is not overcome by the repulsive force, but by the thermal energy of the molecules. Temperature is the measure of thermal energy, so increasing temperature reduces the influence of the attractive force. In contrast, the influence of the repulsive force is essentially unaffected by temperature. When a gas is compressed to increase its density, the influence of the attractive force increases. If the gas is made sufficiently dense, the attractions can become large enough to overcome the tendency of thermal motion to cause the molecules to disperse. Then the gas can condense to form a solid or liquid, i.e., a condensed phase. Lower temperature favors the formation of a condensed phase. In a condensed phase, there is very nearly a balance between the attractive and repulsive forces. Intermolecular forces observed between atoms and molecules can be described phenomenologically as occurring between permanent and instantaneous dipoles, as outlined above.
Alternatively, one may seek a fundamental, unifying theory that is able to explain the various types of interactions such as hydrogen bonding, van der Waals forces and dipole–dipole interactions. Typically, this is done by applying the ideas of quantum mechanics to molecules, and Rayleigh–Schrödinger perturbation theory has been especially effective in this regard. When applied to existing quantum chemistry methods, such a quantum mechanical explanation of intermolecular interactions provides an array of approximate methods that can be used to analyze intermolecular interactions. One of the most helpful methods in quantum chemistry for visualizing these kinds of intermolecular interactions is the non-covalent interaction index, which is based on the electron density of the system. London dispersion forces play a large role in this. Concerning electron density topology, methods based on the electron density gradient have emerged recently, notably the IBSI (Intrinsic Bond Strength Index), which relies on the IGM (Independent Gradient Model) methodology.
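The Lennard-Jones potential referenced earlier captures this short-range repulsion and long-range attraction in a single expression, V(r) = 4ε[(σ/r)¹² − (σ/r)⁶]. A minimal sketch follows; the argon-like ε and σ values are illustrative textbook numbers, not figures from this article:

```python
def lennard_jones(r, eps, sigma):
    """Lennard-Jones pair potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).
    Positive (repulsive) for r < sigma, negative (attractive) beyond it."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

# Illustrative argon-like parameters: eps ~ 0.997 kJ/mol, sigma ~ 0.34 nm.
eps, sigma = 0.997, 0.34
r_min = 2 ** (1 / 6) * sigma  # the minimum of the potential sits at 2^(1/6)*sigma

print(lennard_jones(sigma, eps, sigma))  # 0.0 -- V crosses zero exactly at r = sigma
print(lennard_jones(r_min, eps, sigma))  # approximately -eps, the well depth
```

The well depth of exactly −ε at r = 2^(1/6)σ follows algebraically: there (σ/r)⁶ = 1/2, so V = 4ε(1/4 − 1/2) = −ε, which is why ε is quoted as the strength of the attraction and σ as the effective molecular diameter.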
[ { "paragraph_id": 0, "text": "An intermolecular force (IMF) (or secondary force) is the force that mediates interaction between molecules, including the electromagnetic forces of attraction or repulsion which act between atoms and other types of neighbouring particles, e.g. atoms or ions. Intermolecular forces are weak relative to intramolecular forces – the forces which hold a molecule together. For example, the covalent bond, involving sharing electron pairs between atoms, is much stronger than the forces present between neighboring molecules. Both sets of forces are essential parts of force fields frequently used in molecular mechanics.", "title": "" }, { "paragraph_id": 1, "text": "The first reference to the nature of microscopic forces is found in Alexis Clairaut's work Théorie de la figure de la Terre, published in Paris in 1743. Other scientists who have contributed to the investigation of microscopic forces include: Laplace, Gauss, Maxwell and Boltzmann.", "title": "" }, { "paragraph_id": 2, "text": "Attractive intermolecular forces are categorized into the following types:", "title": "" }, { "paragraph_id": 3, "text": "Information on intermolecular forces is obtained by macroscopic measurements of properties like viscosity, pressure, volume, temperature (PVT) data. The link to microscopic aspects is given by virial coefficients and Lennard-Jones potentials.", "title": "" }, { "paragraph_id": 4, "text": "A hydrogen bond is an extreme form of dipole-dipole bonding, referring to the attraction between a hydrogen atom that is bonded to an element with high electronegativity, usually nitrogen, oxygen, or fluorine. The hydrogen bond is often described as a strong electrostatic dipole–dipole interaction. 
However, it also has some features of covalent bonding: it is directional, stronger than a van der Waals force interaction, produces interatomic distances shorter than the sum of their van der Waals radii, and usually involves a limited number of interaction partners, which can be interpreted as a kind of valence. The number of hydrogen bonds formed between molecules is equal to the number of active pairs. The molecule which donates its hydrogen is termed the donor molecule, while the molecule containing the lone pair participating in H bonding is termed the acceptor molecule. The number of active pairs is equal to the smaller of the number of hydrogens the donor has and the number of lone pairs the acceptor has.", "title": "Hydrogen bonding" }, { "paragraph_id": 5, "text": "Though not all are depicted in the diagram, water molecules have four active bonds. The oxygen atom’s two lone pairs interact with a hydrogen each, forming two additional hydrogen bonds, and the second hydrogen atom also interacts with a neighbouring oxygen. Intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group 16 hydrides, which have little capability to hydrogen bond. Intramolecular hydrogen bonding is partly responsible for the secondary, tertiary, and quaternary structures of proteins and nucleic acids. It also plays an important role in the structure of polymers, both synthetic and natural.", "title": "Hydrogen bonding" }, { "paragraph_id": 6, "text": "The attraction between cationic and anionic sites is a noncovalent, or intermolecular interaction which is usually referred to as ion pairing or salt bridge. It is essentially due to electrostatic forces, although in aqueous medium the association is driven by entropy and often even endothermic.
Most salts form crystals with characteristic distances between the ions; in contrast to many other noncovalent interactions, salt bridges are not directional and in the solid state usually show contact determined only by the van der Waals radii of the ions. Inorganic as well as organic ions display, in water at moderate ionic strength I, similar salt-bridge association ΔG values of around 5 to 6 kJ/mol for a 1:1 combination of anion and cation, almost independent of the nature (size, polarizability, etc.) of the ions. The ΔG values are additive and approximately a linear function of the charges; the interaction of, e.g., a doubly charged phosphate anion with a singly charged ammonium cation accounts for about 2 × 5 = 10 kJ/mol. The ΔG values depend on the ionic strength I of the solution, as described by the Debye-Hückel equation; at zero ionic strength one observes ΔG = 8 kJ/mol.", "title": "Beta bonding" }, { "paragraph_id": 7, "text": "Dipole–dipole interactions (or Keesom interactions) are electrostatic interactions between molecules which have permanent dipoles. This interaction is stronger than the London forces but is weaker than ion-ion interaction because only partial charges are involved. These interactions tend to align the molecules to increase attraction (reducing potential energy). An example of a dipole–dipole interaction can be seen in hydrogen chloride (HCl): the positive end of a polar molecule will attract the negative end of the other molecule and influence its position. Polar molecules have a net attraction between them. Examples of polar molecules include hydrogen chloride (HCl) and chloroform (CHCl3).", "title": "Dipole–dipole and similar interactions " }, { "paragraph_id": 8, "text": "Often molecules contain dipolar groups of atoms, but have no overall dipole moment on the molecule as a whole. This occurs if there is symmetry within the molecule that causes the dipoles to cancel each other out.
This occurs in molecules such as tetrachloromethane and carbon dioxide. The dipole–dipole interaction between two individual atoms is usually zero, since atoms rarely carry a permanent dipole.", "title": "Dipole–dipole and similar interactions " }, { "paragraph_id": 9, "text": "The Keesom interaction is a van der Waals force. It is discussed further in the section \"Van der Waals forces\".", "title": "Dipole–dipole and similar interactions " }, { "paragraph_id": 10, "text": "Ion–dipole and ion–induced dipole forces are similar to dipole–dipole and dipole–induced dipole interactions but involve ions, instead of only polar and non-polar molecules. Ion–dipole and ion–induced dipole forces are stronger than dipole–dipole interactions because the charge of any ion is much greater than the charge of a dipole moment. Ion–dipole bonding is stronger than hydrogen bonding.", "title": "Dipole–dipole and similar interactions " }, { "paragraph_id": 11, "text": "An ion–dipole force consists of an ion and a polar molecule interacting. They align so that the positive and negative groups are next to one another, allowing maximum attraction. An important example of this interaction is the hydration of ions in water, which gives rise to the hydration enthalpy. Polar water molecules surround ions in solution, and the energy released during the process is known as the hydration enthalpy. The interaction is immensely important in justifying the stability of various ions (like Cu2+) in water.", "title": "Dipole–dipole and similar interactions " }, { "paragraph_id": 12, "text": "An ion–induced dipole force consists of an ion and a non-polar molecule interacting.
As in a dipole–induced dipole force, the charge of the ion causes distortion of the electron cloud on the non-polar molecule.", "title": "Dipole–dipole and similar interactions " }, { "paragraph_id": 13, "text": "The van der Waals forces arise from interaction between uncharged atoms or molecules, leading not only to such phenomena as the cohesion of condensed phases and physical adsorption of gases, but also to a universal force of attraction between macroscopic bodies.", "title": "Van der Waals forces" }, { "paragraph_id": 14, "text": "The first contribution to van der Waals forces is due to electrostatic interactions between rotating permanent dipoles, quadrupoles (all molecules with symmetry lower than cubic), and multipoles. It is termed the Keesom interaction, named after Willem Hendrik Keesom. These forces originate from the attraction between permanent dipoles (dipolar molecules) and are temperature dependent.", "title": "Van der Waals forces" }, { "paragraph_id": 15, "text": "They consist of attractive interactions between dipoles that are ensemble averaged over different rotational orientations of the dipoles. It is assumed that the molecules are constantly rotating and never get locked into place. This is a good assumption, but at some point molecules do get locked into place. The energy of a Keesom interaction depends on the inverse sixth power of the distance, unlike the interaction energy of two spatially fixed dipoles, which depends on the inverse third power of the distance. The Keesom interaction can only occur among molecules that possess permanent dipole moments, i.e., two polar molecules. Keesom interactions are also very weak van der Waals interactions and do not occur in aqueous solutions that contain electrolytes.
The angle-averaged interaction is given by the following equation: V = −d1^2 d2^2 / (24 π^2 ε0^2 εr^2 kB T r^6),", "title": "Van der Waals forces" }, { "paragraph_id": 16, "text": "where d = electric dipole moment, ε0 = permittivity of free space, εr = dielectric constant of surrounding material, T = temperature, kB = Boltzmann constant, and r = distance between molecules.", "title": "Van der Waals forces" }, { "paragraph_id": 17, "text": "The second contribution is the induction (also termed polarization) or Debye force, arising from interactions between rotating permanent dipoles and from the polarizability of atoms and molecules (induced dipoles). These induced dipoles occur when one molecule with a permanent dipole repels another molecule's electrons. A molecule with a permanent dipole can induce a dipole in a similar neighboring molecule and cause mutual attraction. Debye forces cannot occur between atoms. The forces between induced and permanent dipoles are not as temperature dependent as Keesom interactions because the induced dipole is free to shift and rotate around the polar molecule. The Debye induction effects and Keesom orientation effects are termed polar interactions.", "title": "Van der Waals forces" }, { "paragraph_id": 18, "text": "The induced dipole forces appear from the induction (also termed polarization), which is the attractive interaction between a permanent multipole on one molecule and an induced multipole (induced by the former di/multipole) on another. This interaction is called the Debye force, named after Peter J. W. Debye.", "title": "Van der Waals forces" }, { "paragraph_id": 19, "text": "One example of an induction interaction between a permanent dipole and an induced dipole is the interaction between HCl and Ar. In this system, Ar experiences a dipole as its electrons are attracted (to the H side of HCl) or repelled (from the Cl side) by HCl.
The angle-averaged interaction is given by the following equation: V = −d1^2 α2 / (16 π^2 ε0^2 εr^2 r^6),", "title": "Van der Waals forces" }, { "paragraph_id": 20, "text": "where α2 = polarizability.", "title": "Van der Waals forces" }, { "paragraph_id": 21, "text": "This kind of interaction can be expected between any polar molecule and non-polar/symmetrical molecule. The induction-interaction force is far weaker than dipole–dipole interaction, but stronger than the London dispersion force.", "title": "Van der Waals forces" }, { "paragraph_id": 22, "text": "The third and dominant contribution is the dispersion or London force (fluctuating dipole–induced dipole), which arises due to the non-zero instantaneous dipole moments of all atoms and molecules. Such polarization can be induced either by a polar molecule or by the repulsion of negatively charged electron clouds in non-polar molecules. Thus, London interactions are caused by random fluctuations of electron density in an electron cloud. An atom with a large number of electrons will have a greater associated London force than an atom with fewer electrons. The dispersion (London) force is the most important component because all materials are polarizable, whereas Keesom and Debye forces require permanent dipoles. The London interaction is universal and is present in atom–atom interactions as well. For various reasons, London interactions (dispersion) have been considered relevant for interactions between macroscopic bodies in condensed systems. Hamaker developed the theory of van der Waals forces between macroscopic bodies in 1937 and showed that the additivity of these interactions renders them considerably more long-range.", "title": "Van der Waals forces" }, { "paragraph_id": 23, "text": "This comparison is approximate. The actual relative strengths will vary depending on the molecules involved.
For instance, the presence of water creates competing interactions that greatly weaken the strength of both ionic and hydrogen bonds. In static systems, ionic bonding and covalent bonding will always be stronger than intermolecular forces in any given substance. But this is not so for large dynamic systems such as enzyme molecules interacting with substrate molecules. Here the numerous intermolecular bonds (most often hydrogen bonds) form an active intermediate state in which intermolecular bonds cause some covalent bonds to be broken while others are formed, in this way driving the thousands of enzymatic reactions so important for living organisms.", "title": "Relative strength of forces" }, { "paragraph_id": 24, "text": "Intermolecular forces are repulsive at short distances and attractive at long distances (see the Lennard-Jones potential). In a gas, the repulsive force chiefly has the effect of keeping two molecules from occupying the same volume. This gives a real gas a tendency to occupy a larger volume than an ideal gas at the same temperature and pressure. The attractive force draws molecules closer together and gives a real gas a tendency to occupy a smaller volume than an ideal gas. Which interaction is more important depends on temperature and pressure (see compressibility factor).", "title": "Effect on the behavior of gases" }, { "paragraph_id": 25, "text": "In a gas, the distances between molecules are generally large, so intermolecular forces have only a small effect. The attractive force is not overcome by the repulsive force, but by the thermal energy of the molecules. Temperature is a measure of thermal energy, so increasing temperature reduces the influence of the attractive force.
In contrast, the influence of the repulsive force is essentially unaffected by temperature.", "title": "Effect on the behavior of gases" }, { "paragraph_id": 26, "text": "When a gas is compressed to increase its density, the influence of the attractive force increases. If the gas is made sufficiently dense, the attractions can become large enough to overcome the tendency of thermal motion to cause the molecules to disperse. Then the gas can condense to form a solid or liquid, i.e., a condensed phase. Lower temperature favors the formation of a condensed phase. In a condensed phase, there is very nearly a balance between the attractive and repulsive forces.", "title": "Effect on the behavior of gases" }, { "paragraph_id": 27, "text": "Intermolecular forces observed between atoms and molecules can be described phenomenologically as occurring between permanent and instantaneous dipoles, as outlined above. Alternatively, one may seek a fundamental, unifying theory that is able to explain the various types of interactions such as hydrogen bonding, van der Waals force and dipole–dipole interactions. Typically, this is done by applying the ideas of quantum mechanics to molecules, and Rayleigh–Schrödinger perturbation theory has been especially effective in this regard. When applied to existing quantum chemistry methods, such a quantum mechanical explanation of intermolecular interactions provides an array of approximate methods that can be used to analyze intermolecular interactions. One of the most helpful quantum-chemical methods for visualizing this kind of intermolecular interaction is the non-covalent interaction index, which is based on the electron density of the system.
London dispersion forces play a large role in this.", "title": "Quantum mechanical theories" }, { "paragraph_id": 28, "text": "Concerning electron density topology, methods based on the electron density gradient have recently emerged, notably with the development of the IBSI (Intrinsic Bond Strength Index), relying on the IGM (Independent Gradient Model) methodology.", "title": "Quantum mechanical theories" } ]
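The angle-averaged Keesom and Debye expressions described in the paragraphs above can be evaluated numerically. A minimal sketch in SI units (the 1.85 D dipole moments, polarizability, and 0.35 nm separation below are illustrative example values, not taken from the article):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
KB = 1.380649e-23        # Boltzmann constant, J/K

def keesom_energy(d1, d2, r, T, eps_r=1.0):
    """Angle-averaged Keesom (permanent dipole-permanent dipole) energy in J.
    d1, d2: dipole moments (C*m); r: separation (m); T: temperature (K)."""
    return -(d1**2 * d2**2) / (24 * math.pi**2 * EPS0**2 * eps_r**2 * KB * T * r**6)

def debye_energy(d1, alpha2, r, eps_r=1.0):
    """Angle-averaged Debye (permanent dipole-induced dipole) energy in J.
    alpha2: polarizability of the second molecule (SI units, C*m^2/V)."""
    return -(d1**2 * alpha2) / (16 * math.pi**2 * EPS0**2 * eps_r**2 * r**6)

# Illustrative example: two water-like 1.85 D dipoles at 0.35 nm and 298 K.
DEBYE = 3.33564e-30  # 1 debye in C*m
print(keesom_energy(1.85 * DEBYE, 1.85 * DEBYE, 0.35e-9, 298.0))
```

Both terms fall off as the inverse sixth power of r, and only the Keesom energy weakens with rising temperature, matching the discussion above.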
An intermolecular force (IMF) is the force that mediates interaction between molecules, including the electromagnetic forces of attraction or repulsion which act between atoms and other types of neighbouring particles, e.g. atoms or ions. Intermolecular forces are weak relative to intramolecular forces – the forces which hold a molecule together. For example, the covalent bond, involving sharing electron pairs between atoms, is much stronger than the forces present between neighboring molecules. Both sets of forces are essential parts of force fields frequently used in molecular mechanics. The first reference to the nature of microscopic forces is found in Alexis Clairaut's work Théorie de la figure de la Terre, published in Paris in 1743. Other scientists who have contributed to the investigation of microscopic forces include: Laplace, Gauss, Maxwell and Boltzmann. Attractive intermolecular forces are categorized into the following types: hydrogen bonding; ion–dipole forces and ion–induced dipole forces; and van der Waals forces (Keesom force, Debye force, and London dispersion force). Information on intermolecular forces is obtained by macroscopic measurements of properties like viscosity and pressure–volume–temperature (PVT) data. The link to microscopic aspects is given by virial coefficients and Lennard-Jones potentials.
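The Lennard-Jones potential mentioned in the article captures the short-range repulsion and long-range attraction described in the "Effect on the behavior of gases" section. A minimal sketch (the argon-like σ and ε parameters are assumed example values, not from the article):

```python
import math

def lennard_jones(r, epsilon, sigma):
    """Lennard-Jones 12-6 potential: steep repulsion at short range,
    r**-6 attraction at long range, well depth -epsilon at r = 2**(1/6)*sigma."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# Illustrative argon-like parameters (assumed).
sigma, epsilon = 3.4e-10, 1.65e-21  # m, J
r_min = 2 ** (1 / 6) * sigma
print(lennard_jones(r_min, epsilon, sigma))  # bottom of the well, equal to -epsilon
```

The sign change of the potential around σ mirrors the text: repulsion dominates when molecules are squeezed together, attraction when they are farther apart.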
2001-12-19T21:09:10Z
2023-12-13T15:34:08Z
[ "Template:Citation", "Template:Chemical bonds", "Template:Short description", "Template:Main", "Template:Anchor", "Template:Dipole-dipole-interaction-in-HCl-2D", "Template:Cite web", "Template:Authority control", "Template:Div col end", "Template:Reflist", "Template:Cite book", "Template:Cite journal", "Template:Div col", "Template:GoldBookRef" ]
https://en.wikipedia.org/wiki/Intermolecular_force
15,420
IRQ
IRQ may refer to:
[ { "paragraph_id": 0, "text": "IRQ may refer to:", "title": "" } ]
IRQ may refer to: Interrupt request, a computer hardware signal Iraq Qeshm Air
2021-09-17T02:20:15Z
[ "Template:Wiktionary", "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/IRQ
15,422
List of Internet top-level domains
This list of Internet top-level domains (TLD) contains top-level domains, which are those domains in the DNS root zone of the Domain Name System of the Internet. A list of the top-level domains by the Internet Assigned Numbers Authority (IANA) is maintained at the Root Zone Database. IANA also oversees the approval process for new proposed top-level domains for ICANN. As of April 2021, their root domain contains 1502 top-level domains. As of March 2021, the IANA root database includes 1589 TLDs. That also includes 68 that are not assigned (revoked), 8 that are retired and 11 test domains. Those are not represented in IANA's listing and are not in the root.zone file (the root.zone file also includes one root domain). IANA distinguishes the following groups of top-level domains: Seven generic top-level domains were created early in the development of the Internet, and predate the creation of ICANN in 1998. As of 20 May 2017, there were 255 country-code top-level domains, purely in the Latin alphabet, using two-character codes. As of June 2022, the number was 316, with the addition of internationalized domains. Internationalised domain names have been proposed for Japan and Libya. All of these TLDs are internationalized domain names (IDN) and support second-level IDNs. ICANN/IANA has created some Special-Use domain names which are meant for special technical purposes. ICANN/IANA owns all of the Special-Use domain names. Besides the TLDs managed (or at least tracked) by IANA or ICANN, other independent groups have created, or had attempted to create, their own TLDs with varying technical specifications, functions, and outcomes. Blockchain-based domains are registered and exchanged using a public blockchain like Ethereum. Oftentimes, these domains serve specific functions such as creating human-readable references to smart contract addresses used in DApps or personal wallet addresses.
Generally, these non-standard domains are unreachable through the normal DNS resolution process and instead require clients to use some sort of transparent web proxy or gateway to access them. In the case of alternative DNS roots, organizations or projects make use of the same mechanisms of the DNS but instead take on the role of ICANN in managing and administering an entirely separate root zone, thus having the ability to create new TLDs independently. However, this doesn't make these domains any less isolated from the rest of the internet, though the ability for clients to resolve them theoretically only requires switching to a recursive DNS resolver that recognizes and serves records underneath the alternate root zone.
[ { "paragraph_id": 0, "text": "This list of Internet top-level domains (TLD) contains top-level domains, which are those domains in the DNS root zone of the Domain Name System of the Internet. A list of the top-level domains by the Internet Assigned Numbers Authority (IANA) is maintained at the Root Zone Database. IANA also oversees the approval process for new proposed top-level domains for ICANN. As of April 2021, their root domain contains 1502 top-level domains. As of March 2021, the IANA root database includes 1589 TLDs. That also includes 68 that are not assigned (revoked), 8 that are retired and 11 test domains. Those are not represented in IANA's listing and are not in root.zone file (root.zone file also includes one root domain).", "title": "" }, { "paragraph_id": 1, "text": "IANA distinguishes the following groups of top-level domains:", "title": "Types" }, { "paragraph_id": 2, "text": "Seven generic top-level domains were created early in the development of the Internet, and predate the creation of ICANN in 1998.", "title": "Original top-level domains" }, { "paragraph_id": 3, "text": "As of 20 May 2017, there were 255 country-code top-level domains, purely in the Latin alphabet, using two-character codes. As of June 2022, the number was 316, with the addition of internationalized domains.", "title": "Country code top-level domains" }, { "paragraph_id": 4, "text": "Internationalised domain names have been proposed for Japan and Libya.", "title": "Country code top-level domains" }, { "paragraph_id": 5, "text": "All of these TLDs are internationalized domain names (IDN) and support second-level IDNs.", "title": "Internationalized generic top-level domains" }, { "paragraph_id": 6, "text": "ICANN/IANA has created some Special-Use domain names which are meant for special technical purposes. 
ICANN/IANA owns all of the Special-Use domain names.", "title": "Special-Use Domains" }, { "paragraph_id": 7, "text": "Besides the TLDs managed (or at least tracked) by IANA or ICANN, other independent groups have created, or had attempted to create, their own TLDs with varying technical specifications, functions, and outcomes.", "title": "Non-IANA domains" }, { "paragraph_id": 8, "text": "Blockchain-based domains are registered and exchanged using a public blockchain like Ethereum. Oftentimes, these domains serve specific functions such as creating human-readable references to smart contract addresses used in DApps or personal wallet addresses. Generally, these non-standard domains are unreachable through the normal DNS resolution process and instead require clients to use some sort of transparent web proxy or gateway to access them.", "title": "Non-IANA domains" }, { "paragraph_id": 9, "text": "In the case of alternative DNS roots, organizations or projects make use of the same mechanisms of the DNS but instead take on the role of ICANN in managing and administering an entirely separate root zone, thus having the ability to create new TLDs independently. However, this doesn't make these domains any less isolated from the rest of the internet, though the ability for clients to resolve them theoretically only requires switching to a recursive DNS resolver that recognizes and serves records underneath the alternate root zone.", "title": "Non-IANA domains" } ]
This list of Internet top-level domains (TLD) contains top-level domains, which are those domains in the DNS root zone of the Domain Name System of the Internet. A list of the top-level domains by the Internet Assigned Numbers Authority (IANA) is maintained at the Root Zone Database. IANA also oversees the approval process for new proposed top-level domains for ICANN. As of April 2021, their root domain contains 1502 top-level domains. As of March 2021, the IANA root database includes 1589 TLDs. That also includes 68 that are not assigned (revoked), 8 that are retired and 11 test domains. Those are not represented in IANA's listing and are not in the root.zone file.
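The root.zone file referenced above lists each TLD as the owner name of its resource records, in the standard DNS presentation format. A rough sketch of counting unique TLDs from such a file (the sample records and the assumption that fields are whitespace-separated are illustrative; real files also contain the root's own records and comments):

```python
def count_tlds(zone_text):
    """Count unique TLD names in root-zone-style resource records.
    Assumes the usual presentation format: NAME TTL CLASS TYPE RDATA."""
    tlds = set()
    for line in zone_text.splitlines():
        line = line.strip()
        if not line or line.startswith(";"):  # skip blanks and comments
            continue
        name = line.split()[0].rstrip(".").lower()
        if name:  # the root's own records have the empty owner name "."
            tlds.add(name)
    return len(tlds)

sample = """\
; illustrative records only
com. 172800 IN NS a.gtld-servers.net.
com. 172800 IN NS b.gtld-servers.net.
org. 172800 IN NS a0.org.afilias-nst.info.
. 86400 IN NS a.root-servers.net.
"""
print(count_tlds(sample))  # 2
```

Counting distinct owner names rather than lines matters because each TLD appears once per name server and DNSSEC record.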
2001-12-20T13:53:22Z
2023-12-09T13:08:38Z
[ "Template:Use dmy dates", "Template:Va", "Template:Cite book", "Template:As of", "Template:Main", "Template:Yes", "Template:Unknown", "Template:Webarchive", "Template:Multiple issues", "Template:Clear", "Template:Flagdeco", "Template:Reflist", "Template:Short description", "Template:Sup", "Template:Generic top-level domains", "Template:Cite news", "Template:In lang", "Template:Cite press release", "Template:Cite IETF", "Template:Anchor", "Template:Flagicon image", "Template:Flagicon", "Template:Ill", "Template:TOC limit", "Template:No", "Template:Flag", "Template:Cite web", "Template:URL", "Template:Interlanguage link" ]
https://en.wikipedia.org/wiki/List_of_Internet_top-level_domains
15,428
Idealism
Idealism in philosophy, also known as philosophical idealism or metaphysical idealism, is the set of metaphysical perspectives asserting that, most fundamentally, reality is equivalent to mind, spirit, or consciousness; that reality is entirely a mental construct; or that ideas are the highest form of reality or have the greatest claim to being considered "real". The latter, more radical view is often first credited to the Ancient Greek philosopher Plato as part of a theory now known as Platonic idealism. Besides Western philosophy, idealism also appears in some Indian philosophy, namely in Vedanta, one of the orthodox schools of Hindu philosophy, and in some streams of Buddhism. Epistemologically, idealism is accompanied by philosophical skepticism about the possibility of knowing the existence of any thing that is independent of the human mind. Ontologically, idealism asserts that the existence of things depends upon the human mind; thus, ontological idealism rejects the perspectives of physicalism and dualism, because neither perspective gives ontological priority to the human mind. In contrast to materialism, idealism asserts the primacy of consciousness as the origin and prerequisite of phenomena. During the European Enlightenment, certain qualified versions of idealism arose, such as George Berkeley's subjective idealism, which proposed that physical objects exist only to the extent that one perceives them and thus the physical world does not exist outside of a mind. According to Berkeley, who was an Anglican bishop, a single eternal mind keeps all of physical reality stable, and this is God. By contrast, Immanuel Kant said that idealism "does not concern the existence of things", but that "our modes of representation" of things such as space and time are not "determinations that belong to things in themselves", but are essential features of the human mind.
Thus, Kant's transcendental idealism proposes that objects of experience rely upon their existence in the human mind that perceives the objects, and that the nature of the object-in-itself is external to human experience, unable to be conceived without the application of categories, which give structure to the human experience of reality. Kant's philosophy would be reinterpreted by Arthur Schopenhauer and by German idealists such as J.G. Fichte, F.W.J. Schelling, and G.W.F. Hegel. This tradition, which emphasized the mental or "ideal" character of all phenomena, gave birth to idealistic and subjectivist schools ranging from British idealism to phenomenalism to existentialism. Indian philosophers proposed the earliest arguments that the world of experience is grounded in the mind's perception of the physical world. Hindu idealism gave panentheistic arguments for the existence of an all-pervading consciousness as the true nature, as the true grounding of reality. In contrast, the Yogācāra school, which arose within Mahayana Buddhism in India in the 4th century AD, based its "mind-only" idealism to a greater extent on phenomenological analyses of personal experience. Idealism as a philosophy came under heavy attack in the West at the turn of the 20th century. The most influential critics of both epistemological and ontological idealism were G. E. Moore and Bertrand Russell, but its critics also included the new realists. The attacks by Moore and Russell were so influential that even more than 100 years later "any acknowledgment of idealistic tendencies is viewed in the English-speaking world with reservation." However, many aspects and paradigms of idealism did still have a large influence on subsequent philosophy. Idealism is a term with several related meanings. It comes via Latin idea from the Ancient Greek idea (ἰδέα) from idein (ἰδεῖν), meaning "to see". The term entered the English language by 1743. 
It was first used in the abstract metaphysical sense "belief that reality is made up only of ideas" by Christian Wolff in 1747. The term re-entered the English language in this abstract sense by 1796. In ordinary language, as when speaking of Woodrow Wilson's political idealism, it generally suggests the priority of ideals, principles, values, and goals over concrete realities. Idealists are understood to represent the world as it might or should be, unlike pragmatists, who focus on the world as it presently is. In the arts, similarly, idealism affirms imagination and attempts to realize a mental conception of beauty, a standard of perfection, juxtaposed to aesthetic naturalism and realism. The term idealism is also sometimes used in a sociological sense, which emphasizes how human ideas—especially beliefs and values—shape society. Metaphysical idealism is an ontological doctrine that holds that reality itself is incorporeal or experiential at its core. Beyond this, idealists disagree on which aspects of the mental are more basic. Platonic idealism affirms that abstractions are more basic to reality than the things we perceive, while subjective idealists and phenomenalists tend to privilege sensory experience over abstract reasoning. Epistemological idealism is the view that reality can only be known through ideas, that only psychological experience can be apprehended by the mind. Subjective idealists like George Berkeley are anti-realists in terms of a mind-independent world. However, not all idealists restrict the real or the knowable to our immediate subjective experience. Objective idealists make claims about a transempirical world, but simply deny that this world is essentially divorced from or ontologically prior to the mental. Thus, Plato affirms an objective and knowable reality transcending our subjective awareness—a rejection of epistemological idealism—but proposes that this reality is grounded in ideal entities, a form of metaphysical idealism. 
Nor do all metaphysical idealists agree on the nature of the ideal; for instance, according to Plato, the fundamental entities were non-mental abstract forms. As a rule, transcendental idealists like Kant affirm idealism's epistemic side without committing themselves to whether reality is ultimately mental; objective idealists like Plato affirm reality's metaphysical basis in the mental or abstract without restricting their epistemology to ordinary experience; and subjective idealists like Berkeley affirm both metaphysical and epistemological idealism. Idealism as a form of metaphysical monism holds that consciousness, not matter, is the ground of all being. It is monist because it holds that there is only one type of thing in the universe and idealist because it holds that the one thing is consciousness. Anaxagoras (480 BC) taught that "all things" were created by Nous ("Mind"). He held that Mind held the cosmos together and gave human beings a connection to the cosmos or a pathway to the divine. Plato's theory of forms or "ideas" describes ideal forms (for example the platonic solids in geometry or abstracts like Goodness and Justice) as universals existing independently of any particular instance. Arne Grøn calls this doctrine "the classic example of a metaphysical idealism as a transcendent idealism", while Simone Klein calls Plato "the earliest representative of metaphysical objective idealism." Nevertheless, Plato holds that matter is real, though transitory and imperfect, and is perceived by our body and its senses and given existence by the eternal ideas that are perceived directly by our rational soul. Plato was therefore a metaphysical and epistemological dualist, an outlook that modern idealism has striven to avoid: Plato's thought cannot therefore be counted as idealist in the modern sense.
With the neoplatonist Plotinus, wrote Nathaniel Alfred Boll, "there even appears, probably for the first time in Western philosophy, idealism that had long been current in the East even at that time, for it taught... that the soul has made the world by stepping from eternity into time..." Similarly, in regard to passages from the Enneads, "The only space or place of the world is the soul" and "Time must not be assumed to exist outside the soul." Ludwig Noiré wrote: "For the first time in Western philosophy we find idealism proper in Plotinus". However, Plotinus does not address whether we know external objects, unlike Schopenhauer and other modern philosophers. Christian theologians have held idealist views, often based on neoplatonism, despite the influence of Aristotelian scholasticism from the 12th century onward. However, there is certainly a sense in which the scholastics retained the idealism that came via Augustine right back to Plato. Later western theistic idealism such as that of Hermann Lotze offers a theory of the "world ground" in which all things find their unity: it has been widely accepted by Protestant theologians. Several modern religious movements, for example the organizations within the New Thought Movement and the Unity Church, may be said to have a particularly idealist orientation. The theology of Christian Science includes a form of idealism: it teaches that all that truly exists is God and God's ideas; that the world as it appears to the senses is a distortion of the underlying spiritual reality, a distortion that may be corrected (both conceptually and in terms of human experience) through a reorientation (spiritualization) of thought. Wang Yangming, a Ming Chinese neo-Confucian philosopher, official, educationist, calligraphist and general, held that objects do not exist entirely apart from the mind because the mind shapes them.
It is not the world that shapes the mind but the mind that gives reason to the world, so the mind alone is the source of all reason, having an inner light, an innate moral goodness and understanding of what is good. There are currents of idealism throughout Indian philosophy, ancient and modern. Hindu idealism often takes the form of monism or non-dualism, espousing the view that a unitary consciousness is the essence or meaning of the phenomenal reality and plurality. Buddhist idealism on the other hand is more epistemic and is not a metaphysical monism, which Buddhists consider eternalistic and hence not the Middle Way between extremes espoused by the Buddha. The oldest reference to Idealism in Vedic texts is in Purusha Sukta of the Rig Veda. This Sukta espouses panentheism by presenting cosmic being Purusha as both pervading all universe and yet being transcendent to it. Absolute idealism can be seen in Chāndogya Upaniṣad, where things of the objective world like the five elements and the subjective world such as will, hope, memory etc. are seen to be emanations from the Self. Idealist notions have been propounded by the Vedanta schools of thought, which use the Vedas, especially the Upanishads as their key texts. Idealism was opposed by dualists Samkhya, the atomists Vaisheshika, the logicians Nyaya, the linguists Mimamsa and the materialists Cārvāka. There are various sub schools of Vedanta, like Advaita Vedanta (non-dual), Vishishtadvaita and Bhedabheda Vedanta (difference and non-difference). The schools of Vedanta all attempt to explain the nature and relationship of Brahman (universal soul or Self) and Atman (individual self), which they see as the central topic of the Vedas. One of the earliest attempts at this was Bādarāyaņa's Brahma Sutras, which is canonical for all Vedanta sub-schools. Advaita Vedanta is a major sub school of Vedanta which holds a non-dual Idealistic metaphysics. 
According to Advaita thinkers like Adi Shankara (788–820) and his contemporary Maṇḍana Miśra, Brahman, the single unitary consciousness or absolute awareness, appears as the diversity of the world because of maya or illusion, and hence perception of plurality is mithya, error. The world and all beings or souls in it have no separate existence from Brahman, universal consciousness, and the seemingly independent soul (jiva) is identical to Brahman. These doctrines are represented in verses such as brahma satyam jagan mithya; jīvo brahmaiva na aparah (Brahman is alone True, and this world of plurality is an error; the individual self is not different from Brahman).

Other forms of Vedanta like the Vishishtadvaita of Ramanuja and the Bhedabheda of Bhāskara are not as radical in their non-dualism, accepting that there is a certain difference between individual souls and Brahman. The Dvaita school of Vedanta of Madhvacharya maintains the opposing view that the world is real and eternal. It also argues that the real Atman fully depends on, and is a reflection of, the independent Brahman.

The Tantric tradition of Kashmir Shaivism has also been categorized by scholars as a form of Idealism. The key thinker of this tradition is the Kashmirian Abhinavagupta (975–1025 CE).

Modern Vedic Idealism was defended by the influential Indian philosopher Sarvepalli Radhakrishnan in his 1932 An Idealist View of Life and other works, which espouse Advaita Vedanta. The essence of Hindu Idealism is captured by such modern writers as Sri Nisargadatta Maharaj, Sri Aurobindo, P. R. Sarkar, and Sohail Inayatullah.

Buddhist views which can be said to be similar to Idealism appear in Mahayana Buddhist texts such as the Samdhinirmocana sutra, Laṅkāvatāra Sūtra, Dashabhumika sutra, etc. These were later expanded upon by Indian Buddhist philosophers of the influential Yogacara school, like Vasubandhu, Asaṅga, Dharmakīrti, and Śāntarakṣita.
Yogacara thought was also promoted in China by Chinese philosophers and translators like Xuanzang. There is a modern scholarly disagreement about whether Yogacara Buddhism can be said to be a form of idealism. As Saam Trivedi notes: "on one side of the debate, writers such as Jay Garfield, Jeffrey Hopkins, Paul Williams, and others maintain the idealism label, while on the other side, Stefan Anacker, Dan Lusthaus, Richard King, Thomas Kochumuttom, Alex Wayman, Janice Dean Willis, and others have argued that Yogacara is not idealist." The central point of issue is what Buddhist philosophers like Vasubandhu, who used the term vijñapti-matra ("representation-only" or "cognition-only") and formulated arguments to refute external objects, actually meant to say.

Vasubandhu's works include a refutation of external objects or externality itself, arguing that the true nature of reality is beyond subject-object distinctions. He views ordinary conscious experience as deluded in its perceptions of an external world separate from itself and instead argues that all there is, is Vijñapti (representation or conceptualization). Hence Vasubandhu begins his Vimsatika with the verse: All this is consciousness-only, because of the appearance of non-existent objects, just as someone with an optical disorder may see non-existent nets of hair.

Likewise, the Buddhist philosopher Dharmakirti's view of the apparent existence of external objects is summed up by him in the Pramānaṿārttika ('Commentary on Logic and Epistemology'): Cognition experiences itself, and nothing else whatsoever. Even the particular objects of perception are by nature just consciousness itself.

While some writers like Jay Garfield hold that Vasubandhu is a metaphysical idealist, others see him as closer to an epistemic idealist like Kant who holds that our knowledge of the world is simply knowledge of our own concepts and perceptions of a transcendental world.
Sean Butler, upholding that Yogacara is a form of idealism, albeit its own unique type, notes the similarity of Kant's categories and Yogacara's Vāsanās, both of which are simply phenomenal tools with which the mind interprets the noumenal realm. Unlike Kant, however, who holds that the noumenon or thing-in-itself is unknowable to us, Vasubandhu holds that ultimate reality is knowable, but only through the non-conceptual yogic perception of a highly trained meditative mind.

Writers like Dan Lusthaus who hold that Yogacara is not a metaphysical idealism point out, for example, that Yogācāra thinkers did not focus on consciousness to assert it as ontologically real, but simply to analyze how our experiences, and thus our suffering, are created. As Lusthaus notes: "no Indian Yogācāra text ever claims that the world is created by mind. What they do claim is that we mistake our projected interpretations of the world for the world itself, i.e. we take our own mental constructions to be the world." Lusthaus notes that there are similarities to Western epistemic idealists like Kant and Husserl, enough so that Yogacara can be seen as a form of epistemological idealism. However, he also notes key differences like the concepts of karma and nirvana. Saam Trivedi, meanwhile, notes the similarities between epistemic idealism and Yogacara, but adds that Yogacara Buddhism is in a sense its own theory. Similarly, Thomas Kochumuttom sees Yogacara as "an explanation of experience, rather than a system of ontology" and Stefan Anacker sees Vasubandhu's philosophy as a form of psychology and as a mainly therapeutic enterprise.

Subjective idealism (also known as immaterialism) describes a relationship between experience and the world in which objects are no more than collections or bundles of sense data in the perceiver.
Proponents include Berkeley, Bishop of Cloyne, an Anglo-Irish philosopher who advanced a theory he called "immaterialism", later referred to as "subjective idealism", contending that individuals can only know sensations and ideas of objects directly, not abstractions such as "matter", and that ideas also depend upon being perceived for their very existence: esse est percipi, "to be is to be perceived".

Arthur Collier published similar assertions, though there seems to have been no influence between the two contemporary writers. For Collier, the only knowable reality is the represented image of an external object. Matter, as a cause of that image, is unthinkable and therefore nothing to us. An external world as absolute matter unrelated to an observer does not exist as far as we are concerned. The universe cannot exist as it appears if there is no perceiving mind. Collier was influenced by An Essay Towards the Theory of the Ideal or Intelligible World by the Cambridge Platonist John Norris (1701).

Paul Brunton, a British philosopher, mystic, traveler, and guru, taught a type of idealism called "mentalism", similar to that of Bishop Berkeley, proposing a master world-image, projected or manifested by a world-mind, and an infinite number of individual minds participating. A tree does not cease to exist if nobody sees it because the world-mind is projecting the idea of the tree to all minds.

Epistemological idealism is a subjectivist position in epistemology that holds that what one knows about an object exists only in one's mind. Proponents include Brand Blanshard. A. A. Luce and John Foster are other subjectivists. Luce, in Sense without Matter (1954), attempts to bring Berkeley up to date by modernizing his vocabulary and putting the issues he faced in modern terms, and treats the Biblical account of matter and the psychology of perception and nature.
Foster's The Case for Idealism argues that the physical world is the logical creation of natural, non-logical constraints on human sense-experience. Foster's latest defense of his views (phenomenalistic idealism) is in his book A World for Us: The Case for Phenomenalistic Idealism. Critics of subjective idealism include Bertrand Russell (in his popular 1912 book The Problems of Philosophy), the Australian philosopher David Stove, Alan Musgrave, and John Searle.

Transcendental idealism is a doctrine founded by Immanuel Kant in the eighteenth century. It maintains that we can know no more of reality than its appearance to us, that is, according to our capacities of sensibility and understanding. His Critique of Pure Reason (2nd ed.) contains a section entitled "Refutation of Idealism", which distinguishes transcendental idealism from Descartes's sceptical idealism and Berkeley's anti-realist strain of subjective idealism. The section "Paralogisms of Pure Reason" is also an implicit critique of Descartes's idealism. Kant says that it is not possible to infer the "I" as an object (Descartes' cogito ergo sum) purely from "the spontaneity of thought". Kant focused on ideas drawn from British philosophers such as Locke, Berkeley and Hume, but distinguished his transcendental or critical idealism from previous varieties: The thesis of all genuine idealists, from the Eleatic School up to Bishop Berkeley, is contained in this formula: "All cognition through the senses and experience is nothing but sheer illusion, and there is truth only in the ideas of pure understanding and reason." The principle that governs and determines my idealism throughout is, on the contrary: "All cognition of things out of mere pure understanding or pure reason is nothing but sheer illusion, and there is truth only in experience."

According to the version of transcendental idealism propounded by Arthur Schopenhauer, mental pictures are what constitute subjective knowledge.
The ideal, for him, is what can be attributed to our own minds; the images in our head are what comprise the ideal. This is to say that we are restricted to our own consciousness. The world that appears is only a representation, which is all that we directly and immediately know. All objects that are external to the mind are known indirectly. In his own words, "the objective, as such, always and essentially has its existence in the consciousness of a subject; it is therefore the subject's representation, and consequently is conditioned by the subject, and moreover by the subject's forms of representation, which belong to the subject and not to the object."

Charles Bernard Renouvier was the first philosopher in France to formulate a complete idealistic system since Nicolas Malebranche. His system is based on Immanuel Kant's, as his chosen term "néo-criticisme" indicates. It is a transformation rather than a continuation of Kantianism.

Objective idealism asserts that the reality of experiencing combines and transcends the realities of the object experienced and of the mind of the observer. Proponents include Thomas Hill Green, Josiah Royce, Benedetto Croce, and Charles Sanders Peirce.

F. W. J. Schelling (1775–1854) claimed that J. G. Fichte's "I" needs the Not-I, because there is no subject without object, and vice versa. So there is no difference between the subjective and the objective, that is, the ideal and the real. This is Schelling's "absolute identity": the ideas or mental images in the mind are identical to the extended objects which are external to the mind.

Absolute idealism is G. W. F. Hegel's account of how existence is comprehensible as an all-inclusive whole. Hegel called his philosophy "absolute" idealism in contrast to the "subjective idealism" of Berkeley and the "transcendental idealism" of Kant and Fichte.
The exercise of reason and intellect enables the philosopher to know ultimate historical reality, the phenomenological constitution of self-determination, the dialectical development of self-awareness in the realm of history. In his Science of Logic (1812–1814) Hegel argues that finite qualities are not fully "real" because they depend on other finite qualities to determine them. Qualitative infinity, on the other hand, would be more self-determining and hence more fully real. Similarly finite natural things are less "real"—because they are less self-determining—than spiritual things like morally responsible people, ethical communities and God. So any doctrine, such as materialism, that asserts that finite qualities or natural objects are fully real is mistaken. Hegel certainly intends to preserve what he takes to be true of German Idealism, in particular Kant's insistence that ethical reason can and does go beyond finite inclinations.

For Hegel there must be some identity of thought and being for the "subject" (any human observer) to be able to know any observed "object" (any external entity, possibly even another human) at all. Under Hegel's concept of "subject-object identity", subject and object both have spirit (Hegel's ersatz, redefined, nonsupernatural "God") as their inner reality—and in that sense are identical. But until spirit's "self-realization" occurs, the subject (a human mind) mistakenly thinks every "object" it observes is something "alien", meaning something separate or apart from "subject". In Hegel's words, "The object is revealed to it [to "subject"] by [as] something alien, and it does not recognize itself." Self-realization occurs when the speculative philosopher (e.g., Hegel) arrives on the scene and realizes that every "object" is himself, because both subject and object are essentially spirit.
When self-realization is achieved and spirit knows itself absolutely, the "finite" human recognizes itself as the "infinite" ("God", divine), replacing the supernatural God of "picture-thought" or "representation" [Vorstellung] characteristic of positive religion.

Actual idealism is a form of idealism developed by Giovanni Gentile that grew into a "grounded" idealism contrasting with Kant and Hegel. The idea is a version of Occam's razor: the simpler explanation is to be preferred. Actual idealism is the idea that reality is the ongoing act of thinking, or in Italian "pensiero pensante". Any action done by humans is classified as human thought because the action was done due to predisposed thought. Gentile further believes that thoughts are the only things that truly exist, since reality is defined through the act of thinking. This idea was derived from his paper "The Theory of Mind As Pure Act". Since thoughts are actions, any conjectured idea can be enacted. This idea affects not only the individual's life but everyone around them, which in turn affects the state, since the people are the state. Therefore, the thoughts of each person are subsumed within the state. The state is a composition of many minds that come together to change the country for better or worse. Gentile theorizes that thoughts can only be conjectured within the bounds of known reality; abstract thinking does not exist. Thoughts cannot be formed outside our known reality because we are the reality that halts us from thinking externally. In accordance with "The Act of Thought of Pure Thought", our actions comprise our thoughts, our thoughts create perception, perceptions define reality, and thus we think within our created reality. The present act of thought is reality, but the past is not reality; it is history, since the past can be rewritten through present knowledge and perspective on the event. The reality that is currently constructed can be completely changed through language (e.g.
bias (omission, source, tone)). The unreliability of the recorded reality can skew the original concept and make the past account unreliable. Actual idealism is regarded as a liberal and tolerant doctrine since it acknowledges that every being pictures reality, in which their ideas remain hatched, differently, even though reality remains a figment of thought. Although the core concept of the theory is famous for its simplicity, its application is regarded as extremely ambiguous. Over the years, philosophers have interpreted it in numerous different ways: Holmes took it as a metaphysics of the thinking act; Betti as a form of hermeneutics; Harris as a metaphysics of democracy; Fogu as a modernist philosophy of history.

Giovanni Gentile was a key supporter of fascism, regarded by many as the "philosopher of fascism". Gentile's philosophy was the key to understanding fascism as it was believed by many who supported and loved it. They believed that if the a priori synthesis of subject and object is true, there is no difference between the individuals in society; they are all one, which means that they have equal rights, roles, and jobs. In the fascist state, submission is given to one leader because individuals act as one body. In Gentile's view, far more can be accomplished when individuals are under a corporate body than as a collection of autonomous individuals.

Pluralistic idealism takes the view that there are many individual minds that together underlie the existence of the observed world and make possible the existence of the physical universe. Pluralistic idealism does not assume the existence of a single ultimate mental reality.

Personalism is the view that the minds that underlie reality are the minds of persons. Borden Parker Bowne, a philosopher at Boston University, a founder and popularizer of personal idealism, presented it as a substantive reality of persons, the only reality, as known directly in self-consciousness.
Reality is a society of interacting persons dependent on the Supreme Person of God. Other proponents include George Holmes Howison and J. M. E. McTaggart. Howison's personal idealism was also called "California Personalism" by others to distinguish it from the "Boston Personalism" of Bowne. Howison maintained that both impersonal, monistic idealism and materialism run contrary to the experience of moral freedom. To deny freedom to pursue truth, beauty, and "benignant love" is to undermine every profound human venture, including science, morality, and philosophy. Personalistic idealists Borden Parker Bowne and Edgar S. Brightman and realistic (in some senses of the term, though he remained influenced by neoplatonism) personal theist Saint Thomas Aquinas address a core issue, namely that of dependence upon an infinite personal God. Howison, in his book The Limits of Evolution and Other Essays Illustrating the Metaphysical Theory of Personal Idealism, created a democratic notion of personal idealism that extended all the way to God, who was no longer the ultimate monarch but the ultimate democrat in eternal relation to other eternal persons. J. M. E. McTaggart's idealist atheism and Thomas Davidson's apeirotheism resemble Howison's personal idealism.

J. M. E. McTaggart argued that minds alone exist and only relate to each other through love. Space, time and material objects are unreal. In The Unreality of Time he argued that time is an illusion because it is impossible to produce a coherent account of a sequence of events. The Nature of Existence (1927) contained his arguments that space, time, and matter cannot possibly be real. In his Studies in Hegelian Cosmology, he declared that metaphysics is not relevant to social and political action. McTaggart "thought that Hegel was wrong in supposing that metaphysics could show that the state is more than a means to the good of the individuals who compose it".
For McTaggart "philosophy can give us very little, if any, guidance in action... Why should a Hegelian citizen be surprised that his belief as to the organic nature of the Absolute does not help him in deciding how to vote? Would a Hegelian engineer be reasonable in expecting that his belief that all matter is spirit should help him in planning a bridge?"

Thomas Davidson taught a philosophy called "apeirotheism", a "form of pluralistic idealism...coupled with a stern ethical rigorism" which he defined as "a theory of Gods infinite in number". The theory was indebted to Aristotle's pluralism and his concepts of Soul, the rational, living aspect of a living substance which cannot exist apart from the body because it is not a substance but an essence, and nous, rational thought, reflection and understanding. Although a perennial source of controversy, Aristotle arguably views the latter as both eternal and immaterial in nature, as exemplified in his theology of unmoved movers. Identifying Aristotle's God with rational thought, Davidson argued, contrary to Aristotle, that just as the soul cannot exist apart from the body, God cannot exist apart from the world.

The English psychologist and philosopher James Ward, inspired by Leibniz and panpsychism, also defended a form of pluralistic idealism. According to Ward the universe is composed of "psychic monads" of different levels, interacting for mutual self-betterment.

Idealist notions took a strong hold among physicists of the early 20th century confronted with the paradoxes of quantum physics and the theory of relativity. In The Grammar of Science, Preface to the 2nd Edition, 1900, Karl Pearson wrote, "There are many signs that a sound idealism is surely replacing, as a basis for natural philosophy, the crude materialism of the older physicists." This book influenced Einstein's regard for the importance of the observer in scientific measurements.
In § 5 of that book, Pearson asserted that "...science is in reality a classification and analysis of the contents of the mind..." Also, "...the field of science is much more consciousness than an external world."

Arthur Eddington, a British astrophysicist of the early 20th century, wrote in his book The Nature of the Physical World that the stuff of the world is mind-stuff, adding that "The mind-stuff of the world is, of course, something more general than our individual conscious minds." Ian Barbour, in his book Issues in Science and Religion, cites Arthur Eddington's The Nature of the Physical World (1928) as a text arguing that the Heisenberg uncertainty principle provides a scientific basis for "the defense of the idea of human freedom", and his Science and the Unseen World (1929) as support for philosophical idealism, "the thesis that reality is basically mental."

The physicist Sir James Jeans wrote: "The stream of knowledge is heading towards a non-mechanical reality; the Universe begins to look more like a great thought than like a great machine. Mind no longer appears to be an accidental intruder into the realm of matter... we ought rather hail it as the creator and governor of the realm of matter." Jeans, in an interview published in The Observer (London), when asked the question: "Do you believe that life on this planet is the result of some sort of accident, or do you believe that it is a part of some great scheme?" replied, "I incline to the idealistic theory that consciousness is fundamental... In general the universe seems to me to be nearer to a great thought than to a great machine."

The chemist Ernest Lester Smith, a member of the occult movement theosophy, wrote a book Intelligence Came First (1975) in which he claimed that consciousness is a fact of nature and that the cosmos is grounded in and pervaded by mind and intelligence.
Bernard d'Espagnat, a French theoretical physicist best known for his work on the nature of reality, wrote a paper titled The Quantum Theory and Reality. According to the paper, "The doctrine that the world is made up of objects whose existence is independent of human consciousness turns out to be in conflict with quantum mechanics and with facts established by experiment." In a Guardian article entitled "Quantum Weirdness: What We Call 'Reality' is Just a State of Mind", d'Espagnat wrote, "the basic components of objects – the particles, electrons, quarks etc. – cannot be thought of as 'self-existent'." He further writes that his research in quantum physics has led him to conclude that an "ultimate reality" exists, which is not embedded in space or time.
According to Advaita thinkers like Adi Shankara (788–820) and his contemporary Maṇḍana Miśra, Brahman, the single unitary consciousness or absolute awareness, appears as the diversity of the world because of maya or illusion, and hence perception of plurality is mithya, error. The world and all beings or souls in it have no separate existence from Brahman, universal consciousness, and the seemingly independent soul (jiva) is identical to Brahman. These doctrines are represented in verses such as brahma satyam jagan mithya; jīvo brahmaiva na aparah (Brahman is alone True, and this world of plurality is an error; the individual self is not different from Brahman). Other forms of Vedanta like the Vishishtadvaita of Ramanuja and the Bhedabheda of Bhāskara are not as radical in their non-dualism, accepting that there is a certain difference between individual souls and Brahman. Dvaita school of Vedanta by Madhvacharya maintains the opposing view that the world is real and eternal. It also argues that real Atman fully depends on the reflection of independent Brahman.", "title": "Idealism in Vedic and Buddhist thought" }, { "paragraph_id": 23, "text": "The Tantric tradition of Kashmir Shaivism has also been categorized by scholars as a form of Idealism. The key thinker of this tradition is the Kashmirian Abhinavagupta (975–1025 CE).", "title": "Idealism in Vedic and Buddhist thought" }, { "paragraph_id": 24, "text": "Modern Vedic Idealism was defended by the influential Indian philosopher Sarvepalli Radhakrishnan in his 1932 An Idealist View of Life and other works, which espouse Advaita Vedanta. The essence of Hindu Idealism is captured by such modern writers as Sri Nisargadatta Maharaj, Sri Aurobindo, P. R. 
Sarkar, and Sohail Inayatullah.", "title": "Idealism in Vedic and Buddhist thought" }, { "paragraph_id": 25, "text": "Buddhist views which can be said to be similar to Idealism appear in Mahayana Buddhist texts such as the Samdhinirmocana sutra, Laṅkāvatāra Sūtra, Dashabhumika sutra, etc. These were later expanded upon by Indian Buddhist philosophers of the influential Yogacara school, like Vasubandhu, Asaṅga, Dharmakīrti, and Śāntarakṣita. Yogacara thought was also promoted in China by Chinese philosophers and translators like Xuanzang.", "title": "Idealism in Vedic and Buddhist thought" }, { "paragraph_id": 26, "text": "There is a modern scholarly disagreement about whether Yogacara Buddhism can be said to be a form of idealism. As Saam Trivedi notes: \"on one side of the debate, writers such as Jay Garfield, Jeffrey Hopkins, Paul Williams, and others maintain the idealism label, while on the other side, Stefan Anacker, Dan Lusthaus, Richard King, Thomas Kochumuttom, Alex Wayman, Janice Dean Willis, and others have argued that Yogacara is not idealist.\" The central point of issue is what Buddhist philosophers like Vasubandhu, who used the term vijñapti-matra (\"representation-only\" or \"cognition-only\") and formulated arguments to refute external objects, actually meant to say.", "title": "Idealism in Vedic and Buddhist thought" }, { "paragraph_id": 27, "text": "Vasubandhu's works include a refutation of external objects or externality itself; he argues that the true nature of reality is beyond subject-object distinctions. He views ordinary conscious experience as deluded in its perceptions of an external world separate from itself and instead argues that all there is is Vijñapti (representation or conceptualization). 
Hence Vasubandhu begins his Vimsatika with the verse: All this is consciousness-only, because of the appearance of non-existent objects, just as someone with an optical disorder may see non-existent nets of hair.", "title": "Idealism in Vedic and Buddhist thought" }, { "paragraph_id": 28, "text": "Likewise, the Buddhist philosopher Dharmakirti's view of the apparent existence of external objects is summed up by him in the Pramānaṿārttika ('Commentary on Logic and Epistemology'): Cognition experiences itself, and nothing else whatsoever. Even the particular objects of perception, are by nature just consciousness itself.", "title": "Idealism in Vedic and Buddhist thought" }, { "paragraph_id": 29, "text": "While some writers like Jay Garfield hold that Vasubandhu is a metaphysical idealist, others see him as closer to an epistemic idealist like Kant who holds that our knowledge of the world is simply knowledge of our own concepts and perceptions of a transcendental world. Sean Butler, upholding that Yogacara is a form of idealism, albeit its own unique type, notes the similarity of Kant's categories and Yogacara's Vāsanās, both of which are simply phenomenal tools with which the mind interprets the noumenal realm. Unlike Kant, however, who holds that the noumenon or thing-in-itself is unknowable to us, Vasubandhu holds that ultimate reality is knowable, but only through non-conceptual yogic perception of a highly trained meditative mind.", "title": "Idealism in Vedic and Buddhist thought" }, { "paragraph_id": 30, "text": "Writers like Dan Lusthaus who hold that Yogacara is not a metaphysical idealism point out, for example, that Yogācāra thinkers did not focus on consciousness to assert it as ontologically real, but simply to analyze how our experiences and thus our suffering are created. As Lusthaus notes: \"no Indian Yogācāra text ever claims that the world is created by mind. 
What they do claim is that we mistake our projected interpretations of the world for the world itself, i.e. we take our own mental constructions to be the world.\" Lusthaus notes that there are similarities to Western epistemic idealists like Kant and Husserl, enough so that Yogacara can be seen as a form of epistemological idealism. However, he also notes key differences like the concepts of karma and nirvana. Saam Trivedi meanwhile notes the similarities between epistemic idealism and Yogacara, but adds that Yogacara Buddhism is in a sense its own theory.", "title": "Idealism in Vedic and Buddhist thought" }, { "paragraph_id": 31, "text": "Similarly, Thomas Kochumuttom sees Yogacara as \"an explanation of experience, rather than a system of ontology\" and Stefan Anacker sees Vasubandhu's philosophy as a form of psychology and as a mainly therapeutic enterprise.", "title": "Idealism in Vedic and Buddhist thought" }, { "paragraph_id": 32, "text": "Subjective idealism (also known as immaterialism) describes a relationship between experience and the world in which objects are no more than collections or bundles of sense data in the perceiver. Proponents include Berkeley, Bishop of Cloyne, an Anglo-Irish philosopher who advanced a theory he called \"immaterialism\", later referred to as \"subjective idealism\", contending that individuals can only know sensations and ideas of objects directly, not abstractions such as \"matter\", and that ideas also depend upon being perceived for their very existence - esse est percipi; \"to be is to be perceived\".", "title": "Subjective idealism" }, { "paragraph_id": 33, "text": "Arthur Collier published similar assertions, though there seems to have been no influence between the two contemporary writers. The only knowable reality is the represented image of an external object. Matter as a cause of that image is unthinkable and therefore nothing to us. 
An external world as absolute matter unrelated to an observer does not exist as far as we are concerned. The universe cannot exist as it appears if there is no perceiving mind. Collier was influenced by An Essay Towards the Theory of the Ideal or Intelligible World by Cambridge Platonist John Norris (1701).", "title": "Subjective idealism" }, { "paragraph_id": 34, "text": "Paul Brunton, a British philosopher, mystic, traveler, and guru, taught a type of idealism called \"mentalism\", similar to that of Bishop Berkeley, proposing a master world-image, projected or manifested by a world-mind, and an infinite number of individual minds participating. A tree does not cease to exist if nobody sees it because the world-mind is projecting the idea of the tree to all minds.", "title": "Subjective idealism" }, { "paragraph_id": 35, "text": "Epistemological idealism is a subjectivist position in epistemology that holds that what one knows about an object exists only in one's mind. Proponents include Brand Blanshard.", "title": "Subjective idealism" }, { "paragraph_id": 36, "text": "A. A. Luce and John Foster are other subjectivists. Luce, in Sense without Matter (1954), attempts to bring Berkeley up to date by modernizing his vocabulary and putting the issues he faced in modern terms, and treats the Biblical account of matter and the psychology of perception and nature. Foster's The Case for Idealism argues that the physical world is the logical creation of natural, non-logical constraints on human sense-experience. 
Foster's latest defense of his views (phenomenalistic idealism) is in his book A World for Us: The Case for Phenomenalistic Idealism.", "title": "Subjective idealism" }, { "paragraph_id": 37, "text": "Critics of subjective idealism include Bertrand Russell (in his popular 1912 book The Problems of Philosophy), the Australian philosopher David Stove, Alan Musgrave, and John Searle.", "title": "Subjective idealism" }, { "paragraph_id": 38, "text": "Transcendental idealism is a doctrine founded by Immanuel Kant in the eighteenth century. It maintains that we cannot know more of reality than its appearance to us, that is, according to our capacities of sensibility and understanding.", "title": "Transcendental idealism" }, { "paragraph_id": 39, "text": "His Critique of Pure Reason (2nd ed.) contains a section entitled \"Refutation of Idealism\", which distinguishes transcendental idealism from Descartes's sceptical idealism and Berkeley's anti-realist strain of subjective idealism. The section \"Paralogisms of Pure Reason\" is also an implicit critique of Descartes's idealism. Kant says that it is not possible to infer the \"I\" as an object (Descartes' cogito ergo sum) purely from \"the spontaneity of thought\". 
Kant focused on ideas drawn from British philosophers such as Locke, Berkeley and Hume, but distinguished his transcendental or critical idealism from previous varieties:", "title": "Transcendental idealism" }, { "paragraph_id": 40, "text": "The thesis of all genuine idealists, from the Eleatic School up to Bishop Berkeley, is contained in this formula: \"All cognition through the senses and experience is nothing but sheer illusion, and there is truth only in the ideas of pure understanding and reason.\" The principle that governs and determines my idealism throughout is, on the contrary: \"All cognition of things out of mere pure understanding or pure reason is nothing but sheer illusion, and there is truth only in experience.\"", "title": "Transcendental idealism" }, { "paragraph_id": 41, "text": "According to the version of transcendental idealism propounded by Arthur Schopenhauer, mental pictures are what constitute subjective knowledge. The ideal, for him, is what can be attributed to our own minds; the images in our head are what comprise the ideal. This is to say that we are restricted to our own consciousness. The world that appears is only a representation, which is all that we directly and immediately know. All objects that are external to the mind are known indirectly. In his own words, \"the objective, as such, always and essentially has its existence in the consciousness of a subject; it is therefore the subject's representation, and consequently is conditioned by the subject, and moreover by the subject's forms of representation, which belong to the subject and not to the object.\"", "title": "Transcendental idealism" }, { "paragraph_id": 42, "text": "Charles Bernard Renouvier was the first philosopher in France to formulate a complete idealistic system since Nicolas Malebranche. His system is based on Immanuel Kant's, as his chosen term \"néo-criticisme\" indicates. 
It is a transformation rather than a continuation of Kantianism.", "title": "Transcendental idealism" }, { "paragraph_id": 43, "text": "Objective idealism asserts that the reality of experiencing combines and transcends the realities of the object experienced and of the mind of the observer. Proponents include Thomas Hill Green, Josiah Royce, Benedetto Croce, and Charles Sanders Peirce.", "title": "Objective idealism" }, { "paragraph_id": 44, "text": "F. W. J. Schelling (1775–1854) claimed that J. G. Fichte's \"I\" needs the Not-I, because there is no subject without object, and vice versa. So there is no difference between the subjective and the objective, that is, the ideal and the real. This is Schelling's \"absolute identity\": the ideas or mental images in the mind are identical to the extended objects which are external to the mind.", "title": "Objective idealism" }, { "paragraph_id": 45, "text": "Absolute idealism is G. W. F. Hegel's account of how existence is comprehensible as an all-inclusive whole. Hegel called his philosophy \"absolute\" idealism in contrast to the \"subjective idealism\" of Berkeley and the \"transcendental idealism\" of Kant and Fichte. The exercise of reason and intellect enables the philosopher to know ultimate historical reality, the phenomenological constitution of self-determination, the dialectical development of self-awareness in the realm of history.", "title": "Objective idealism" }, { "paragraph_id": 46, "text": "In his Science of Logic (1812–1814) Hegel argues that finite qualities are not fully \"real\" because they depend on other finite qualities to determine them. Qualitative infinity, on the other hand, would be more self-determining and hence more fully real. Similarly finite natural things are less \"real\"—because they are less self-determining—than spiritual things like morally responsible people, ethical communities and God. 
So any doctrine, such as materialism, that asserts that finite qualities or natural objects are fully real is mistaken.", "title": "Objective idealism" }, { "paragraph_id": 47, "text": "Hegel certainly intends to preserve what he takes to be true of German Idealism, in particular Kant's insistence that ethical reason can and does go beyond finite inclinations. For Hegel there must be some identity of thought and being for the \"subject\" (any human observer) to be able to know any observed \"object\" (any external entity, possibly even another human) at all. Under Hegel's concept of \"subject-object identity\", subject and object both have spirit (Hegel's ersatz, redefined, nonsupernatural \"God\") as their inner reality—and in that sense are identical. But until spirit's \"self-realization\" occurs, the subject (a human mind) mistakenly thinks every \"object\" it observes is something \"alien\", meaning something separate or apart from \"subject\". In Hegel's words, \"The object is revealed to it [to \"subject\"] by [as] something alien, and it does not recognize itself.\" Self-realization occurs when the speculative philosopher (e.g., Hegel) arrives on the scene and realizes that every \"object\" is himself, because both subject and object are essentially spirit. When self-actualization is achieved and spirit knows itself absolutely, the \"finite\" human recognizes itself as the \"infinite\" (\"God\", divine), replacing the supernatural God of \"picture-thought\" or \"representation\" [Vorstellung] characteristic of positive religion.", "title": "Objective idealism" }, { "paragraph_id": 48, "text": "Actual idealism is a form of idealism developed by Giovanni Gentile that grew into a \"grounded\" idealism contrasting with Kant and Hegel. The idea is a version of Occam's razor: the simplest explanation is preferable. Actual idealism is the idea that reality is the ongoing act of thinking, or in Italian \"pensiero pensante\". 
Any action done by humans is classified as human thought because the action arises from predisposed thought. He further believes that thoughts are the only things that truly exist, since reality is defined through the act of thinking. This idea was derived from Gentile's paper, \"The Theory of Mind As Pure Act\".", "title": "Objective idealism" }, { "paragraph_id": 49, "text": "Since thoughts are actions, any conjectured idea can be enacted. This idea not only affects the individual's life, but everyone around them, which in turn affects the state, since the people are the state. Therefore, the thoughts of each person are subsumed within the state. The state is a composition of many minds that come together to change the country for better or worse.", "title": "Objective idealism" }, { "paragraph_id": 50, "text": "Gentile theorizes that thoughts can only be conjectured within the bounds of known reality; abstract thinking does not exist. Thoughts cannot be formed outside our known reality because we ourselves are the reality that halts us from thinking externally. In accordance with \"The Act of Thought of Pure Thought\", our actions comprise our thoughts, our thoughts create perception, perceptions define reality, and thus we think within our created reality.", "title": "Objective idealism" }, { "paragraph_id": 51, "text": "The present act of thought is reality, but the past is not reality; it is history. The reason is that the past can be rewritten through present knowledge and perspective of the event. The reality that is currently constructed can be completely changed through language (e.g. bias (omission, source, tone)). The unreliability of the recorded reality can skew the original concept and render accounts of the past unreliable. Actual idealism is regarded as a liberal and tolerant doctrine, since it acknowledges that every being pictures reality, and the ideas hatched within it, differently. 
This holds even though reality is a figment of thought.", "title": "Objective idealism" }, { "paragraph_id": 52, "text": "Although the core concept of the theory is famous for its simplicity, its application is regarded as extremely ambiguous. Over the years, philosophers have interpreted it in numerous different ways: Holmes took it as metaphysics of the thinking act; Betti as a form of hermeneutics; Harris as a metaphysics of democracy; Fogu as a modernist philosophy of history.", "title": "Objective idealism" }, { "paragraph_id": 53, "text": "Giovanni Gentile was a key supporter of fascism, regarded by many as the \"philosopher of fascism\". Gentile's philosophy was the key to understanding fascism as it was believed by many who supported and loved it. They believed that if the a priori synthesis of subject and object is true, there is no difference between the individuals in society; they are all one, which means that they have equal rights, roles, and jobs. In the fascist state, submission is given to one leader because the individuals act as one body. In Gentile's view, far more can be accomplished when individuals are under a corporate body than as a collection of autonomous individuals.", "title": "Objective idealism" }, { "paragraph_id": 54, "text": "Pluralistic idealism takes the view that there are many individual minds that together underlie the existence of the observed world and make possible the existence of the physical universe. Pluralistic idealism does not assume the existence of a single ultimate mental reality.", "title": "Objective idealism" }, { "paragraph_id": 55, "text": "Personalism is the view that the minds that underlie reality are the minds of persons. Borden Parker Bowne, a philosopher at Boston University, a founder and popularizer of personal idealism, presented it as a substantive reality of persons, the only reality, as known directly in self-consciousness. Reality is a society of interacting persons dependent on the Supreme Person of God. 
Other proponents include George Holmes Howison and J. M. E. McTaggart.", "title": "Objective idealism" }, { "paragraph_id": 56, "text": "Howison's personal idealism was also called \"California Personalism\" by others to distinguish it from the \"Boston Personalism\" which was that of Bowne. Howison maintained that both impersonal, monistic idealism and materialism run contrary to the experience of moral freedom. To deny freedom to pursue truth, beauty, and \"benignant love\" is to undermine every profound human venture, including science, morality, and philosophy. Personalistic idealists Borden Parker Bowne and Edgar S. Brightman and realistic (in some senses of the term, though he remained influenced by neoplatonism) personal theist Saint Thomas Aquinas address a core issue, namely that of dependence upon an infinite personal God.", "title": "Objective idealism" }, { "paragraph_id": 57, "text": "Howison, in his book The Limits of Evolution and Other Essays Illustrating the Metaphysical Theory of Personal Idealism, created a democratic notion of personal idealism that extended all the way to God, who was no longer the ultimate monarch but the ultimate democrat in eternal relation to other eternal persons. J. M. E. McTaggart's idealist atheism and Thomas Davidson's apeirotheism resemble Howison's personal idealism.", "title": "Objective idealism" }, { "paragraph_id": 58, "text": "J. M. E. McTaggart argued that minds alone exist and only relate to each other through love. Space, time and material objects are unreal. In The Unreality of Time he argued that time is an illusion because it is impossible to produce a coherent account of a sequence of events. The Nature of Existence (1927) contained his arguments that space, time, and matter cannot possibly be real. In his Studies in Hegelian Cosmology, he declared that metaphysics is not relevant to social and political action. 
McTaggart \"thought that Hegel was wrong in supposing that metaphysics could show that the state is more than a means to the good of the individuals who compose it\". For McTaggart \"philosophy can give us very little, if any, guidance in action... Why should a Hegelian citizen be surprised that his belief as to the organic nature of the Absolute does not help him in deciding how to vote? Would a Hegelian engineer be reasonable in expecting that his belief that all matter is spirit should help him in planning a bridge?\"", "title": "Objective idealism" }, { "paragraph_id": 59, "text": "Thomas Davidson taught a philosophy called \"apeirotheism\", a \"form of pluralistic idealism...coupled with a stern ethical rigorism\" which he defined as \"a theory of Gods infinite in number\". The theory was indebted to Aristotle's pluralism and his concepts of Soul, the rational, living aspect of a living substance which cannot exist apart from the body because it is not a substance but an essence, and nous, rational thought, reflection and understanding. Although a perennial source of controversy, Aristotle arguably views the latter as both eternal and immaterial in nature, as exemplified in his theology of unmoved movers. Identifying Aristotle's God with rational thought, Davidson argued, contrary to Aristotle, that just as the soul cannot exist apart from the body, God cannot exist apart from the world.", "title": "Objective idealism" }, { "paragraph_id": 60, "text": "The English psychologist and philosopher James Ward inspired by Leibniz and panpsychism had also defended a form of pluralistic idealism. According to Ward the universe is composed of \"psychic monads\" of different levels, interacting for mutual self-betterment.", "title": "Objective idealism" }, { "paragraph_id": 61, "text": "Idealist notions took a strong hold among physicists of the early 20th century confronted with the paradoxes of quantum physics and the theory of relativity. 
In The Grammar of Science, Preface to the 2nd Edition, 1900, Karl Pearson wrote, \"There are many signs that a sound idealism is surely replacing, as a basis for natural philosophy, the crude materialism of the older physicists.\" This book influenced Einstein's regard for the importance of the observer in scientific measurements. In § 5 of that book, Pearson asserted that \"...science is in reality a classification and analysis of the contents of the mind...\" Also, \"...the field of science is much more consciousness than an external world.\"", "title": "Objective idealism" }, { "paragraph_id": 62, "text": "Arthur Eddington, a British astrophysicist of the early 20th century, wrote in his book The Nature of the Physical World that the stuff of the world is mind-stuff, adding that \"The mind-stuff of the world is, of course, something more general than our individual conscious minds.\" Ian Barbour, in his book Issues in Science and Religion, cites Arthur Eddington's The Nature of the Physical World (1928) as a text that argues The Heisenberg Uncertainty Principles provides a scientific basis for \"the defense of the idea of human freedom\" and his Science and the Unseen World (1929) for support of philosophical idealism \"the thesis that reality is basically mental.\"", "title": "Objective idealism" }, { "paragraph_id": 63, "text": "The physicist Sir James Jeans wrote: \"The stream of knowledge is heading towards a non-mechanical reality; the Universe begins to look more like a great thought than like a great machine. Mind no longer appears to be an accidental intruder into the realm of matter... 
we ought rather hail it as the creator and governor of the realm of matter.\" Jeans, in an interview published in The Observer (London), when asked the question: \"Do you believe that life on this planet is the result of some sort of accident, or do you believe that it is a part of some great scheme?\" replied, \"I incline to the idealistic theory that consciousness is fundamental... In general the universe seems to me to be nearer to a great thought than to a great machine.\"", "title": "Objective idealism" }, { "paragraph_id": 64, "text": "The chemist Ernest Lester Smith, a member of the occult movement theosophy, wrote a book, Intelligence Came First (1975), in which he claimed that consciousness is a fact of nature and that the cosmos is grounded in and pervaded by mind and intelligence.", "title": "Objective idealism" }, { "paragraph_id": 65, "text": "Bernard d'Espagnat, a French theoretical physicist best known for his work on the nature of reality, wrote a paper titled The Quantum Theory and Reality. According to the paper, \"The doctrine that the world is made up of objects whose existence is independent of human consciousness turns out to be in conflict with quantum mechanics and with facts established by experiment.\" In a Guardian article entitled \"Quantum Weirdness: What We Call 'Reality' is Just a State of Mind\", d'Espagnat wrote, \"the basic components of objects – the particles, electrons, quarks etc. – cannot be thought of as 'self-existent'.\" He further writes that his research in quantum physics has led him to conclude that an \"ultimate reality\" exists, which is not embedded in space or time.", "title": "Objective idealism" } ]
Idealism in philosophy, also known as philosophical idealism or metaphysical idealism, is the set of metaphysical perspectives asserting that, most fundamentally, reality is equivalent to mind, spirit, or consciousness; that reality is entirely a mental construct; or that ideas are the highest form of reality or have the greatest claim to being considered "real". The latter, more radical view is often first credited to the Ancient Greek philosopher Plato as part of a theory now known as Platonic idealism. Beyond Western philosophy, idealism also appears in some Indian philosophy, namely in Vedanta, one of the orthodox schools of Hindu philosophy, and in some streams of Buddhism. Epistemologically, idealism is accompanied by philosophical skepticism about the possibility of knowing the existence of any thing that is independent of the human mind. Ontologically, idealism asserts that the existence of things depends upon the human mind; thus, ontological idealism rejects the perspectives of physicalism and dualism, because neither perspective gives ontological priority to the human mind. In contrast to materialism, idealism asserts the primacy of consciousness as the origin and prerequisite of phenomena. During the European Enlightenment, certain qualified versions of idealism arose, such as George Berkeley's subjective idealism, which proposed that physical objects exist only to the extent that one perceives them and thus the physical world does not exist outside of a mind. According to Berkeley, who was an Anglican Bishop, a single eternal mind keeps all of physical reality stable, and this is God. By contrast, Immanuel Kant said that idealism "does not concern the existence of things", but that "our modes of representation" of things such as space and time are not "determinations that belong to things in themselves", but are essential features of the human mind. 
Thus, Kant's transcendental idealism proposes that objects of experience rely upon their existence in the human mind that perceives the objects, and that the nature of the object-in-itself is external to human experience, unable to be conceived without the application of categories, which give structure to the human experience of reality. Kant's philosophy would be reinterpreted by Arthur Schopenhauer and by German idealists such as J.G. Fichte, F.W.J. Schelling, and G.W.F. Hegel. This tradition, which emphasized the mental or "ideal" character of all phenomena, gave birth to idealistic and subjectivist schools ranging from British idealism to phenomenalism to existentialism. Indian philosophers proposed the earliest arguments that the world of experience is grounded in the mind's perception of the physical world. Hindu idealism gave panentheistic arguments for the existence of an all-pervading consciousness as the true nature, as the true grounding of reality. In contrast, the Yogācāra school, which arose within Mahayana Buddhism in India in the 4th century AD, based its "mind-only" idealism to a greater extent on phenomenological analyses of personal experience. Idealism as a philosophy came under heavy attack in the West at the turn of the 20th century. The most influential critics of both epistemological and ontological idealism were G. E. Moore and Bertrand Russell, but its critics also included the new realists. The attacks by Moore and Russell were so influential that even more than 100 years later "any acknowledgment of idealistic tendencies is viewed in the English-speaking world with reservation." However, many aspects and paradigms of idealism did still have a large influence on subsequent philosophy.
2001-12-20T19:28:04Z
2023-12-19T20:24:17Z
[ "Template:Other uses", "Template:Page needed", "Template:Wiktionary", "Template:Use dmy dates", "Template:Reflist", "Template:Cite book", "Template:Cite SEP", "Template:Commons category", "Template:About", "Template:EngvarB", "Template:Asian philosophy sidebar", "Template:Main", "Template:Cite web", "Template:Cite EB1911", "Template:Idealism", "Template:Metaphysics", "Template:Philosophy topics", "Template:Authority control", "Template:Blockquote", "Template:Citation", "Template:Cite encyclopedia", "Template:Literally", "Template:Cite news", "Template:ISBN", "Template:Cite IEP", "Template:Philosophy of mind", "Template:Short description", "Template:Webarchive", "Template:Cite journal", "Template:PhilPapers", "Template:InPho" ]
https://en.wikipedia.org/wiki/Idealism
15,430
Inheritance
Inheritance is the practice of receiving private property, titles, debts, entitlements, privileges, rights, and obligations upon the death of an individual. The rules of inheritance differ among societies and have changed over time. Officially bequeathing private property and/or debts can be performed by a testator via will, as attested by a notary or by other lawful means. In law, an "heir" (feminine: heiress) is a person who is entitled to receive a share of the deceased's (the person who died) property, subject to the rules of inheritance in the jurisdiction of which the deceased was a citizen or where the deceased (decedent) died or owned property at the time of death. The inheritance may be either under the terms of a will or by intestate laws if the deceased had no will. However, the will must comply with the laws of the jurisdiction at the time it was created or it will be declared invalid (for example, some states do not recognise handwritten wills as valid, or only in specific circumstances) and the intestate laws then apply. The exclusion from inheritance of a person who was an heir in a previous will, or would otherwise be expected to inherit, is termed "disinheritance". A person does not become an heir before the death of the deceased, since the exact identity of the persons entitled to inherit is determined only then. Members of ruling noble or royal houses who are expected to become heirs are called heirs apparent if first in line and incapable of being displaced from inheriting by another claim; otherwise, they are heirs presumptive. There is a further concept of joint inheritance, pending renunciation by all but one, which is called coparceny. In modern law, the terms inheritance and heir refer exclusively to succession to property by descent from a deceased dying intestate. Takers in property succeeded to under a will are termed generally beneficiaries, and specifically devisees for real property and legatees for personal property, including money. 
Except in some jurisdictions where a person cannot be legally disinherited (such as the United States state of Louisiana, which allows disinheritance only under specifically enumerated circumstances), a person who would be an heir under intestate laws may be disinherited completely under the terms of a will (an example is that of the will of comedian Jerry Lewis; his will specifically disinherited his six children by his first wife, and their descendants, leaving his entire estate to his second wife). Detailed anthropological and sociological studies have been made about customs of patrimonial inheritance, where only male children can inherit. Some cultures also employ matrilineal succession, where property can only pass along the female line, most commonly going to the sister's sons of the decedent; but also, in some societies, from the mother to her daughters. Some ancient societies and most modern states employ egalitarian inheritance, without discrimination based on gender and/or birth order. Under Jewish law, the inheritance is patrimonial. The father (that is, the owner of the land) bequeaths only to his male descendants, so the Promised Land passes from one Jewish father to his sons. According to the Law of Moses, the firstborn son was entitled to receive twice as much of his father's inheritance as the other sons (Deuteronomy 21:15–17). If there were no living sons and no descendants of any previously living sons, daughters inherit. In Numbers 27, the five daughters of Zelophehad come to Moses and ask for their father's inheritance, as they have no brothers. The order of inheritance is set out: a man's sons inherit first, daughters if no sons, brothers if he has no children, and so on. Later, in Numbers 36, some of the heads of the families of the tribe of Manasseh come to Moses and point out that, if a daughter inherits and then marries a man not from her paternal tribe, her land will pass from her birth-tribe's inheritance into her marriage-tribe's. 
So a further rule is laid down: if a daughter inherits land, she must marry someone within her father's tribe. (The daughters of Zelophehad marry the sons of their father's brothers. There is no indication that this was not their choice.) The laws of Jewish inheritance are discussed in the Talmud, in the Mishneh Torah and by Saadiah ben Joseph among other sources. All these sources agree that the firstborn son is entitled to a double portion of his father's estate. This means that, for example, if a father left five sons, the firstborn receives a third of the estate and each of the other four receives a sixth. If he left nine sons, the firstborn receives a fifth and each of the other eight receives a tenth. If the eldest surviving son is not the firstborn son, he is not entitled to the double portion. Philo of Alexandria and Josephus also comment on the Jewish laws of inheritance, praising them above other law codes of their time. They also agreed that the firstborn son must receive a double portion of his father's estate. At first, Christianity did not have its own inheritance traditions distinct from Judaism. With the accession of Emperor Constantine in 306, Christians both began to distance themselves from Judaism and to have influence on the law and practices of secular institutions. From the beginning, this included inheritance. The Roman practice of adoption was a specific target, because it was perceived to be in conflict with the Judeo-Christian doctrine of primogeniture. As Stephanie Coontz documents in Marriage, a History (Penguin, 2006), not only succession but the whole constellation of rights and practices that included marriage, adoption, legitimacy, consanguinity, and inheritance changed in Western Europe from a Greco-Roman model to a Judeo-Christian pattern, based on Biblical and traditional Judeo-Christian principles. 
The transformation was essentially complete in the Middle Ages, although in English-speaking countries there was additional development under the influence of Protestantism. Even when Europe became secularized and Christianity faded into the background, the legal foundation Christendom had laid remained. Only in the era of modern jurisprudence have there been significant changes. The Quran introduced a number of different rights and restrictions on matters of inheritance, including general improvements to the treatment of women and family life compared to the pre-Islamic societies that existed in the Arabian Peninsula at the time. Furthermore, the Quran introduced additional heirs that were not entitled to inheritance in pre-Islamic times, mentioning nine relatives specifically of which six were female and three were male. However, the inheritance rights of women remained inferior to those of men because in Islam someone always has a responsibility of looking after a woman's expenses. According to Quran 4:11, for example, a son is entitled to twice as much inheritance as a daughter. The Quran also sought to fix the laws of inheritance, thus forming a complete legal system. This development was in contrast to pre-Islamic societies where rules of inheritance varied considerably. In addition to the above changes, the Quran imposed restrictions on testamentary powers of a Muslim in disposing of his or her property. Three verses of the Quran, 4:11, 4:12 and 4:176, give specific details of inheritance and shares, in addition to a few other verses dealing with testamentary matters. But this information was used as a starting point by Muslim jurists who expounded the laws of inheritance even further using Hadith, as well as methods of juristic reasoning like Qiyas. 
Nowadays, inheritance is considered an integral part of Sharia law and its application for Muslims is mandatory, though many peoples (see Historical inheritance systems), despite being Muslim, have other inheritance customs. The distribution of the inherited wealth has varied greatly among different cultures and legal traditions. In nations using civil law, for example, the right of children to inherit wealth from parents in pre-defined ratios is enshrined in law, as far back as the Code of Hammurabi (ca. 1750 BC). In the US State of Louisiana, the only US state where the legal system is derived from the Napoleonic Code, this system is known as "forced heirship", which prohibits disinheritance of adult children except for a few narrowly-defined reasons that a parent is obligated to prove. Other legal traditions, particularly in nations using common law, allow inheritances to be divided however one wishes, or to disinherit any child for any reason. In cases of unequal inheritance, the majority might receive little while only a small number inherit a larger amount. For example, when a son takes over a thriving multimillion-dollar business, the daughter may be given only the balance of the actual inheritance, which amounts to far less than the value of the business given to the son. This is especially seen in old world cultures, but continues in many families to this day. Arguments for eliminating forced heirship include the right to property and the merit of individual allocation of capital over government wealth confiscation and redistribution, but this does not resolve what some describe as the problem of unequal inheritance. In terms of inheritance inequality, some economists and sociologists focus on the intergenerational transmission of income or wealth, which is said to have a direct impact on one's mobility (or immobility) and class position in society. 
Nations differ on the political structure and policy options that govern the transfer of wealth. According to American federal government statistics compiled by Mark Zandi in 1985, the average US inheritance was $39,000. In subsequent years, the overall amount of total annual inheritance more than doubled, reaching nearly $200 billion. By 2050, there will be an estimated $25 trillion inheritance transmitted across generations. Some researchers have attributed this rise to the baby boomer generation. Historically, the baby boomers were the largest influx of children conceived after World War II. For this reason, Thomas Shapiro suggests that this generation "is in the midst of benefiting from the greatest inheritance of wealth in history". Inherited wealth may help explain why many Americans who have become rich had a "substantial head start". In September 2012, according to the Institute for Policy Studies, "over 60 percent" of the Forbes richest 400 Americans "grew up in substantial privilege", and often (but not always) received substantial inheritances. Other research has shown that many inheritances, large or small, are rapidly squandered. Similarly, analysis shows that over two-thirds of high-wealth families lose their wealth within two generations, and almost 80% of high-wealth parents "feel the next generation is not financially responsible enough to handle inheritance". It has been argued that inheritance has a significant effect on social stratification. Inheritance is an integral component of family, economic, and legal institutions, and a basic mechanism of class stratification. It also affects the distribution of wealth at the societal level. The total cumulative effect of inheritance on stratification outcomes takes three forms, according to scholars who have examined the subject. The first form of inheritance is the inheritance of cultural capital (i.e. linguistic styles, higher status social circles, and aesthetic preferences). 
The second form of inheritance is through familial interventions in the form of inter vivos transfers (i.e. gifts between the living), especially at crucial junctures in the life course. Examples include a child's milestone stages, such as going to college, getting married, getting a job, and purchasing a home. The third form of inheritance is the transfer of bulk estates at the time of death of the testators, thus resulting in significant economic advantage accruing to children during their adult years. The origin of the stability of inequalities is material (personal possessions one is able to obtain) and is also cultural, rooted in varying child-rearing practices that are geared to socialization according to social class and economic position. Child-rearing practices among those who inherit wealth may center around favoring some groups at the expense of others at the bottom of the social hierarchy. It is further argued that the degree to which economic status and inheritance is transmitted across generations determines one's life chances in society. Although many have linked one's social origins and educational attainment to life chances and opportunities, education cannot serve as the most influential predictor of economic mobility. In fact, children of well-off parents generally receive better schooling and benefit from material, cultural, and genetic inheritances. Likewise, schooling attainment is often persistent across generations and families with higher amounts of inheritance are able to acquire and transmit higher amounts of human capital. Lower amounts of human capital and inheritance can perpetuate inequality in the housing market and higher education. Research reveals that inheritance plays an important role in the accumulation of housing wealth. Those who receive an inheritance are more likely to own a home than those who do not, regardless of the size of the inheritance. 
Often, racial or religious minorities and individuals from socially disadvantaged backgrounds receive less inheritance and wealth. As a result, people of mixed race might be excluded from inheritance privilege and are more likely to rent homes or live in poorer neighborhoods, as well as achieve lower educational attainment compared with whites in America. Individuals with a substantial amount of wealth and inheritance often intermarry with others of the same social class to protect their wealth and ensure the continuous transmission of inheritance across generations; thus perpetuating a cycle of privilege. Nations with the highest income and wealth inequalities often have the highest rates of homicide and disease (such as obesity, diabetes, and hypertension), which result in high mortality rates. A New York Times article reveals that the U.S. is the world's wealthiest nation, but "ranks twenty-ninth in life expectancy, right behind Jordan and Bosnia" and "has the second highest mortality rate of the comparable OECD countries". This has been attributed in large part to the significant inheritance inequality in the country, although there are clearly other factors such as the affordability of healthcare. When social and economic inequalities centered on inheritance are perpetuated by major social institutions such as family, education, religion, etc., these differing life opportunities are argued to be transmitted from generation to generation. As a result, this inequality is believed to become part of the overall social structure. Dynastic wealth is monetary inheritance that is passed on to generations that did not earn it. Dynastic wealth is linked to the term Plutocracy. Much has been written about the rise and influence of dynastic wealth including the bestselling book Capital in the Twenty-First Century by the French economist Thomas Piketty. Bill Gates uses the term in his article "Why Inequality Matters". 
As Communism is founded on the Marxist Labor Theory of Value, any money collected in the course of a lifetime is justified if it was based on the fruits of the person's own labor and not from exploiting others. The first communist government installed after the Russian Revolution resolved therefore to abolish the right of inheritance, with some exceptions. Many states have inheritance taxes or estate taxes, under which a portion of any inheritance or estate becomes government revenue.
[ { "paragraph_id": 0, "text": "Inheritance is the practice of receiving private property, titles, debts, entitlements, privileges, rights, and obligations upon the death of an individual. The rules of inheritance differ among societies and have changed over time. Officially bequeathing private property and/or debts can be performed by a testator via will, as attested by a notary or by other lawful means.", "title": "" }, { "paragraph_id": 1, "text": "In law, an \"heir\" (feminine: heiress) is a person who is entitled to receive a share of the deceased's (the person who died) property, subject to the rules of inheritance in the jurisdiction of which the deceased was a citizen or where the deceased (decedent) died or owned property at the time of death.", "title": "Terminology" }, { "paragraph_id": 2, "text": "The inheritance may be either under the terms of a will or by intestate laws if the deceased had no will. However, the will must comply with the laws of the jurisdiction at the time it was created or it will be declared invalid (for example, some states do not recognise handwritten wills as valid, or only in specific circumstances) and the intestate laws then apply.", "title": "Terminology" }, { "paragraph_id": 3, "text": "The exclusion from inheritance of a person who was an heir in a previous will, or would otherwise be expected to inherit, is termed \"disinheritance\".", "title": "Terminology" }, { "paragraph_id": 4, "text": "A person does not become an heir before the death of the deceased, since the exact identity of the persons entitled to inherit is determined only then. Members of ruling noble or royal houses who are expected to become heirs are called heirs apparent if first in line and incapable of being displaced from inheriting by another claim; otherwise, they are heirs presumptive. 
There is a further concept of joint inheritance, pending renunciation by all but one, which is called coparceny.", "title": "Terminology" }, { "paragraph_id": 5, "text": "In modern law, the terms inheritance and heir refer exclusively to succession to property by descent from a deceased dying intestate. Takers in property succeeded to under a will are termed generally beneficiaries, and specifically devisees for real property and legatees for personal property, including money.", "title": "Terminology" }, { "paragraph_id": 6, "text": "Except in some jurisdictions where a person cannot be legally disinherited (such as the United States state of Louisiana, which allows disinheritance only under specifically enumerated circumstances), a person who would be an heir under intestate laws may be disinherited completely under the terms of a will (an example is that of the will of comedian Jerry Lewis; his will specifically disinherited his six children by his first wife, and their descendants, leaving his entire estate to his second wife).", "title": "Terminology" }, { "paragraph_id": 7, "text": "Detailed anthropological and sociological studies have been made about customs of patrimonial inheritance, where only male children can inherit. Some cultures also employ matrilineal succession, where property can only pass along the female line, most commonly going to the sister's sons of the decedent; but also, in some societies, from the mother to her daughters. Some ancient societies and most modern states employ egalitarian inheritance, without discrimination based on gender and/or birth order.", "title": "History" }, { "paragraph_id": 8, "text": "Under Jewish law, the inheritance is patrimonial. The father (that is, the owner of the land) bequeaths only to his male descendants, so the Promised Land passes from one Jewish father to his sons. 
According to the Law of Moses, the firstborn son was entitled to receive twice as much of his father's inheritance as the other sons (Deuteronomy 21:15–17).", "title": "Religious laws about inheritance" }, { "paragraph_id": 9, "text": "If there were no living sons and no descendants of any previously living sons, daughters inherit. In Numbers 27, the five daughters of Zelophehad come to Moses and ask for their father's inheritance, as they have no brothers. The order of inheritance is set out: a man's sons inherit first, daughters if no sons, brothers if he has no children, and so on.", "title": "Religious laws about inheritance" }, { "paragraph_id": 10, "text": "Later, in Numbers 36, some of the heads of the families of the tribe of Manasseh come to Moses and point out that, if a daughter inherits and then marries a man not from her paternal tribe, her land will pass from her birth-tribe's inheritance into her marriage-tribe's. So a further rule is laid down: if a daughter inherits land, she must marry someone within her father's tribe. (The daughters of Zelophehad marry the sons of their father's brothers. There is no indication that this was not their choice.)", "title": "Religious laws about inheritance" }, { "paragraph_id": 11, "text": "The laws of Jewish inheritance are discussed in the Talmud, in the Mishneh Torah and by Saadiah ben Joseph among other sources. All these sources agree that the firstborn son is entitled to a double portion of his father's estate. This means that, for example, if a father left five sons, the firstborn receives a third of the estate and each of the other four receives a sixth. If he left nine sons, the firstborn receives a fifth and each of the other eight receives a tenth. 
If the eldest surviving son is not the firstborn son, he is not entitled to the double portion.", "title": "Religious laws about inheritance" }, { "paragraph_id": 12, "text": "Philo of Alexandria and Josephus also comment on the Jewish laws of inheritance, praising them above other law codes of their time. They also agreed that the firstborn son must receive a double portion of his father's estate.", "title": "Religious laws about inheritance" }, { "paragraph_id": 13, "text": "At first, Christianity did not have its own inheritance traditions distinct from Judaism. With the accession of Emperor Constantine in 306, Christians both began to distance themselves from Judaism and to have influence on the law and practices of secular institutions. From the beginning, this included inheritance. The Roman practice of adoption was a specific target, because it was perceived to be in conflict with the Judeo-Christian doctrine of primogeniture. As Stephanie Coontz documents in Marriage, a History (Penguin, 2006), not only succession but the whole constellation of rights and practices that included marriage, adoption, legitimacy, consanguinity, and inheritance changed in Western Europe from a Greco-Roman model to a Judeo-Christian pattern, based on Biblical and traditional Judeo-Christian principles. The transformation was essentially complete in the Middle Ages, although in English-speaking countries there was additional development under the influence of Protestantism. Even when Europe became secularized and Christianity faded into the background, the legal foundation Christendom had laid remained. 
Only in the era of modern jurisprudence have there been significant changes.", "title": "Religious laws about inheritance" }, { "paragraph_id": 14, "text": "The Quran introduced a number of different rights and restrictions on matters of inheritance, including general improvements to the treatment of women and family life compared to the pre-Islamic societies that existed in the Arabian Peninsula at the time. Furthermore, the Quran introduced additional heirs that were not entitled to inheritance in pre-Islamic times, mentioning nine relatives specifically of which six were female and three were male. However, the inheritance rights of women remained inferior to those of men because in Islam someone always has a responsibility of looking after a woman's expenses. According to Quran 4:11, for example, a son is entitled to twice as much inheritance as a daughter. The Quran also sought to fix the laws of inheritance, thus forming a complete legal system. This development was in contrast to pre-Islamic societies where rules of inheritance varied considerably. In addition to the above changes, the Quran imposed restrictions on testamentary powers of a Muslim in disposing of his or her property. Three verses of the Quran, 4:11, 4:12 and 4:176, give specific details of inheritance and shares, in addition to a few other verses dealing with testamentary matters. But this information was used as a starting point by Muslim jurists who expounded the laws of inheritance even further using Hadith, as well as methods of juristic reasoning like Qiyas. Nowadays, inheritance is considered an integral part of Sharia law and its application for Muslims is mandatory, though many peoples (see Historical inheritance systems), despite being Muslim, have other inheritance customs.", "title": "Religious laws about inheritance" }, { "paragraph_id": 15, "text": "The distribution of the inherited wealth has varied greatly among different cultures and legal traditions. 
In nations using civil law, for example, the right of children to inherit wealth from parents in pre-defined ratios is enshrined in law, as far back as the Code of Hammurabi (ca. 1750 BC). In the US State of Louisiana, the only US state where the legal system is derived from the Napoleonic Code, this system is known as \"forced heirship\", which prohibits disinheritance of adult children except for a few narrowly-defined reasons that a parent is obligated to prove. Other legal traditions, particularly in nations using common law, allow inheritances to be divided however one wishes, or to disinherit any child for any reason.", "title": "Inequality" }, { "paragraph_id": 16, "text": "In cases of unequal inheritance, the majority might receive little while only a small number inherit a larger amount. For example, when a son takes over a thriving multimillion-dollar business, the daughter may be given only the balance of the actual inheritance, which amounts to far less than the value of the business given to the son. This is especially seen in old world cultures, but continues in many families to this day.", "title": "Inequality" }, { "paragraph_id": 17, "text": "Arguments for eliminating forced heirship include the right to property and the merit of individual allocation of capital over government wealth confiscation and redistribution, but this does not resolve what some describe as the problem of unequal inheritance. In terms of inheritance inequality, some economists and sociologists focus on the intergenerational transmission of income or wealth, which is said to have a direct impact on one's mobility (or immobility) and class position in society. 
Nations differ on the political structure and policy options that govern the transfer of wealth.", "title": "Inequality" }, { "paragraph_id": 18, "text": "According to American federal government statistics compiled by Mark Zandi in 1985, the average US inheritance was $39,000. In subsequent years, the overall amount of total annual inheritance more than doubled, reaching nearly $200 billion. By 2050, there will be an estimated $25 trillion inheritance transmitted across generations.", "title": "Inequality" }, { "paragraph_id": 19, "text": "Some researchers have attributed this rise to the baby boomer generation. Historically, the baby boomers were the largest influx of children conceived after World War II. For this reason, Thomas Shapiro suggests that this generation \"is in the midst of benefiting from the greatest inheritance of wealth in history\". Inherited wealth may help explain why many Americans who have become rich had a \"substantial head start\". In September 2012, according to the Institute for Policy Studies, \"over 60 percent\" of the Forbes richest 400 Americans \"grew up in substantial privilege\", and often (but not always) received substantial inheritances.", "title": "Inequality" }, { "paragraph_id": 20, "text": "Other research has shown that many inheritances, large or small, are rapidly squandered. Similarly, analysis shows that over two-thirds of high-wealth families lose their wealth within two generations, and almost 80% of high-wealth parents \"feel the next generation is not financially responsible enough to handle inheritance\".", "title": "Inequality" }, { "paragraph_id": 21, "text": "It has been argued that inheritance has a significant effect on social stratification. Inheritance is an integral component of family, economic, and legal institutions, and a basic mechanism of class stratification. It also affects the distribution of wealth at the societal level. 
The total cumulative effect of inheritance on stratification outcomes takes three forms, according to scholars who have examined the subject.", "title": "Inequality" }, { "paragraph_id": 22, "text": "The first form of inheritance is the inheritance of cultural capital (i.e. linguistic styles, higher status social circles, and aesthetic preferences). The second form of inheritance is through familial interventions in the form of inter vivos transfers (i.e. gifts between the living), especially at crucial junctures in the life course. Examples include a child's milestone stages, such as going to college, getting married, getting a job, and purchasing a home. The third form of inheritance is the transfer of bulk estates at the time of death of the testators, thus resulting in significant economic advantage accruing to children during their adult years. The origin of the stability of inequalities is material (personal possessions one is able to obtain) and is also cultural, rooted in varying child-rearing practices that are geared to socialization according to social class and economic position. Child-rearing practices among those who inherit wealth may center around favoring some groups at the expense of others at the bottom of the social hierarchy.", "title": "Inequality" }, { "paragraph_id": 23, "text": "It is further argued that the degree to which economic status and inheritance is transmitted across generations determines one's life chances in society. Although many have linked one's social origins and educational attainment to life chances and opportunities, education cannot serve as the most influential predictor of economic mobility. In fact, children of well-off parents generally receive better schooling and benefit from material, cultural, and genetic inheritances. 
Likewise, schooling attainment is often persistent across generations, and families with higher amounts of inheritance are able to acquire and transmit higher amounts of human capital. Lower amounts of human capital and inheritance can perpetuate inequality in the housing market and higher education. Research reveals that inheritance plays an important role in the accumulation of housing wealth. Those who receive an inheritance are more likely to own a home than those who do not, regardless of the size of the inheritance.", "title": "Inequality" }, { "paragraph_id": 24, "text": "Often, racial or religious minorities and individuals from socially disadvantaged backgrounds receive less inheritance and wealth. As a result, mixed races might be excluded from inheritance privilege and are more likely to rent homes or live in poorer neighborhoods, as well as achieve lower educational attainment compared with whites in America. Individuals with a substantial amount of wealth and inheritance often intermarry with others of the same social class to protect their wealth and ensure the continuous transmission of inheritance across generations, thus perpetuating a cycle of privilege.", "title": "Inequality" }, { "paragraph_id": 25, "text": "Nations with the highest income and wealth inequalities often have the highest rates of homicide and disease (such as obesity, diabetes, and hypertension) which results in high mortality rates. A New York Times article reveals that the U.S. is the world's wealthiest nation, but \"ranks twenty-ninth in life expectancy, right behind Jordan and Bosnia\" and \"has the second highest mortality rate of the comparable OECD countries\". 
This has been attributed in large part to the country's significant inheritance inequality, although there are clearly other factors such as the affordability of healthcare.", "title": "Inequality" }, { "paragraph_id": 26, "text": "When social and economic inequalities centered on inheritance are perpetuated by major social institutions such as family, education, religion, etc., these differing life opportunities are argued to be transmitted from generation to generation. As a result, this inequality is believed to become part of the overall social structure.", "title": "Inequality" }, { "paragraph_id": 27, "text": "Dynastic wealth is monetary inheritance that is passed on to generations that did not earn it. Dynastic wealth is linked to the term plutocracy. Much has been written about the rise and influence of dynastic wealth, including the bestselling book Capital in the Twenty-First Century by the French economist Thomas Piketty.", "title": "Inequality" }, { "paragraph_id": 28, "text": "Bill Gates uses the term in his article \"Why Inequality Matters\".", "title": "Inequality" }, { "paragraph_id": 29, "text": "As Communism is founded on the Marxist Labor Theory of Value, any money collected in the course of a lifetime is justified if it was based on the fruits of the person's own labor and not from exploiting others. The first communist government installed after the Russian Revolution resolved therefore to abolish the right of inheritance, with some exceptions.", "title": "Inequality" }, { "paragraph_id": 30, "text": "Many states have inheritance taxes or estate taxes, under which a portion of any inheritance or estate becomes government revenue.", "title": "Taxation" } ]
Inheritance is the practice of receiving private property, titles, debts, entitlements, privileges, rights, and obligations upon the death of an individual. The rules of inheritance differ among societies and have changed over time. Officially bequeathing private property and/or debts can be performed by a testator via will, as attested by a notary or by other lawful means.
2002-02-25T15:43:11Z
2023-12-27T01:45:43Z
[ "Template:Cite EB1911", "Template:Abbr", "Template:Main", "Template:Citation needed", "Template:Who", "Template:Reflist", "Template:Cite encyclopedia", "Template:Webarchive", "Template:Wealth", "Template:About", "Template:Redirect", "Template:Further", "Template:Bibleverse", "Template:Cite web", "Template:Cite news", "Template:In lang", "Template:Qref", "Template:Family", "Template:Property navbox", "Template:Short description", "Template:Wikiquote", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Inheritance
15,435
Ignatius of Antioch
Ignatius of Antioch (/ɪɡˈneɪʃəs/; Greek: Ἰγνάτιος Ἀντιοχείας, translit. Ignátios Antiokheías; died c. 108/140 AD), also known as Ignatius Theophorus (Ἰγνάτιος ὁ Θεοφόρος, Ignátios ho Theophóros, 'the God-bearing'), was an early Christian writer and Patriarch of Antioch. While en route to Rome, where he met his martyrdom, Ignatius wrote a series of letters. This correspondence forms a central part of a later collection of works by the Apostolic Fathers. He is considered one of the three most important of these, together with Clement of Rome and Polycarp. His letters also serve as an example of early Christian theology, and address important topics including ecclesiology, the sacraments, and the role of bishops. Nothing is known of Ignatius' life apart from the words of his letters, except from dubious later traditions. It is said Ignatius converted to Christianity at a young age. Tradition identifies him and his friend Polycarp as disciples of John the Apostle. Later, Ignatius was chosen to serve as Bishop of Antioch; the fourth-century Church historian Eusebius writes that Ignatius succeeded Evodius. Theodoret of Cyrrhus claimed that St. Peter himself left directions that Ignatius be appointed to this episcopal see. Ignatius called himself Theophorus (God Bearer). A tradition arose that he was one of the children whom Jesus Christ took in his arms and blessed. Ignatius' feast day was kept in his own Antioch on 17 October, the day on which he is now celebrated in the Catholic Church and generally in western Christianity, although from the 12th century until 1969 it was put at 1 February in the General Roman Calendar. In the Eastern Orthodox Church it is observed on 20 December. 
The Synaxarium of the Coptic Orthodox Church of Alexandria places it on the 24th of the Coptic Month of Koiak (which is also the 24th day of the fourth month of Tahisas in the Synaxarium of The Ethiopian Orthodox Tewahedo Church), corresponding in three years out of every four to 20 December in the Julian Calendar, which currently falls on 2 January of the Gregorian Calendar. Ignatius is honored in the Church of England and in the Episcopal Church on 17 October. Ignatius was condemned to death for his faith, but instead of being executed in his home town of Antioch, the bishop was taken to Rome by a company of ten soldiers: From Syria even unto Rome I fight with beasts, both by land and sea, both by night and day, being bound to ten leopards, I mean a band of soldiers... Scholars consider Ignatius' transport to Rome unusual, since those persecuted as Christians would be expected to be punished locally. Stevan Davies has pointed out that "no other examples exist from the Flavian age of any prisoners except citizens or prisoners of war being brought to Rome for execution." If Ignatius had been a Roman citizen, he could have appealed to the emperor, with the common result of execution by beheading rather than torture. However, Ignatius's letters state that he was put in chains during the journey, and it was against Roman law for a citizen to be put in bonds during an appeal to the emperor. Allen Brent argues that Ignatius was transferred to Rome for the emperor to provide a spectacle as a victim in the Colosseum. Brent insists, contrary to some, that "it was normal practice to transport condemned criminals from the provinces in order to offer spectator sport in the Colosseum at Rome." Stevan Davies rejects this idea, reasoning that: "if Ignatius was in some way a donation by the Imperial Governor of Syria to the games at Rome, a single prisoner seems a rather miserly gift." 
Instead, Davies proposes that Ignatius may have been indicted by a legate, or representative, of the governor of Syria while the governor was away temporarily, and sent to Rome for trial and execution. Under Roman law, only the governor of a province or the emperor himself could impose capital punishment, so the legate would have faced the choice of imprisoning Ignatius in Antioch or sending him to Rome. Transporting the bishop might have avoided further agitation by the Antiochene Christians. Christine Trevett calls Davies' suggestion "entirely hypothetical" and concludes that no fully satisfactory solution to the problem can be found: "I tend to take the bishop at his word when he says he is a condemned man. But the question remains, why is he going to Rome? The truth is that we do not know." During the journey to Rome, Ignatius and his entourage of soldiers made a number of lengthy stops in Asia Minor, deviating from the most direct land route from Antioch to Rome. Scholars generally agree on the following reconstruction of Ignatius' route of travel: During the journey, the soldiers seem to have allowed the chained Ignatius to meet with entire congregations of Christians, at least at Philadelphia (cf. Ign. Phil. 7), and numerous Christian visitors and messengers were allowed to meet with him individually. These messengers allowed Ignatius to send six letters to nearby churches, and one to Polycarp, the bishop of Smyrna. These aspects of Ignatius' martyrdom are also unusual, in that a prisoner would normally be transported on the most direct route to his destination, particularly since travel by land in the Roman Empire was far more expensive than by sea, and Antioch was a major sea port. Davies argues that Ignatius' circuitous route can only be explained by positing that he was not the main purpose of the soldiers' trip, and that the various stops in Asia Minor were for other state business. 
He suggests that such a scenario would also explain the relative freedom that Ignatius was given to meet with other Christians during the journey. Due to the sparse documentation, the date of Ignatius's death is uncertain. Tradition places his martyrdom in the reign of Trajan (emperor from 98–117 AD). The earliest source for this is the 4th century church historian Eusebius of Caesarea, who is regarded by some modern scholars as unreliable for chronological information on the early church. Eusebius may have had an ideological interest in dating church leaders as early as possible, and asserting a continuous succession between the original apostles of Jesus and the leaders of the church in his day. While many scholars accept this traditional dating, others have argued for a somewhat later date. Richard Pervo dated Ignatius' death to 135–140 AD. British classicist Timothy Barnes has argued for a date in the 140s AD, on the grounds that Ignatius seems to have quoted a work of the Gnostic Ptolemy, who only became active in the 130s. Étienne Decrept has argued from the testimony of John Malalas and the Acts of Drosis that Ignatius was martyred under the reign of Trajan during Apollo's festival in July 116 AD, and in response to the earthquake at Antioch in late 115 AD. Ignatius wrote that he would be thrown to the beasts, and in the fourth century Eusebius reports a tradition confirming this, while the account of Jerome is the first to explicitly mention "lions." John Chrysostom is the first to place Ignatius' martyrdom at the Colosseum. Modern scholars are uncertain whether any of these authors had sources other than Ignatius' own writings. According to a medieval Christian text titled Martyrium Ignatii, Ignatius' remains were carried back to Antioch by his companions after his martyrdom. 
The sixth-century writings of Evagrius Scholasticus state that the reputed remains of Ignatius were moved by the Emperor Theodosius II to the Tychaeum, or Temple of Tyche, which was converted into a church dedicated to Ignatius. In 637, when Antioch was captured by the Rashidun Caliphate, the relics were transferred to the Basilica di San Clemente in Rome. The Martyrium Ignatii is an account of the saint's martyrdom. It is presented as an eye-witness account for the church of Antioch, attributed to Ignatius' companions, Philo of Cilicia, deacon at Tarsus, and Rheus Agathopus, a Syrian. Its most reliable manuscript is the 10th-century collection Codex Colbertinus (Paris), in which it is the final item. The Martyrium presents the confrontation of Bishop Ignatius with Emperor Trajan at Antioch, a familiar trope of Acta of the martyrs, and many details of the long journey to Rome. The Synaxarium of the Coptic Orthodox Church of Alexandria says that he was thrown to the wild beasts that devoured him. The following seven epistles preserved under the name of Ignatius are generally considered authentic, since they were mentioned by the historian Eusebius in the first half of the fourth century. Seven original epistles: The text of these epistles is known in three different recensions, or editions: the Short Recension, found in a Syriac manuscript; the Middle Recension, found only in Greek manuscripts; and the Long Recension, found in Greek and Latin manuscripts. For some time, it was believed that the Long Recension was the only extant version of the Ignatian epistles, but around 1628 a Latin translation of the Middle Recension was discovered by Archbishop James Ussher, who published it in 1646. For around a quarter of a century after this, it was debated which recension represented the original text of the epistles. 
But ever since John Pearson's strong defense of the authenticity of the Middle Recension in the late 17th century, there has been a scholarly consensus that the Middle Recension is the original version of the text. The Long Recension is the product of a fourth-century Arian Christian, who interpolated the Middle Recension epistles in order posthumously to enlist Ignatius as an unwitting witness in theological disputes of that age. This individual also forged the six spurious epistles attributed to Ignatius (see § Pseudo-Ignatius below). Manuscripts representing the Short Recension of the Ignatian epistles were discovered and published by William Cureton in the mid-19th century. For a brief period, there was a scholarly debate on the question of whether the Short Recension was earlier and more original than the Middle Recension. But by the end of the 19th century, Theodor Zahn and J. B. Lightfoot had established a scholarly consensus that the Short Recension is merely a summary of the text of the Middle Recension, and was therefore composed later. Though the Catholic Church has always supported the authenticity of the letters, some Protestants have tended to deny the authenticity of all the epistles because they seem to attest to a monarchical episcopate in the second century. John Calvin called the epistles "rubbish published under Ignatius' name." In 1886, Presbyterian minister and church historian William Dool Killen published a long essay attacking the authenticity of the epistles attributed to Ignatius. He argued that Callixtus, bishop of Rome, forged the letters around AD 220 to garner support for a monarchical episcopate, modeling the renowned Saint Ignatius after his own life to give precedent for his own authority. Killen contrasted this episcopal polity with the presbyterian polity in the writings of Polycarp. Some doubts about the letters' authenticity continued into the 20th century. 
In the 1970s and 1980s, the scholars Robert Joly, Reinhard Hübner, Markus Vinzent, and Thomas Lechner argued forcefully that the epistles of the Middle Recension were forgeries from the reign of Marcus Aurelius (161–180 AD). Joseph Ruis-Camps published a study arguing that the Middle Recension letters were pseudepigraphically composed based on an original, smaller, authentic corpus of four letters (Romans, Magnesians, Trallians, and Ephesians). In 2009, Otto Zwierlein supported the thesis of a forgery written around 170 AD. These publications stirred up heated scholarly controversy, but by 2017, most patristic scholars accepted the authenticity of the seven original epistles. However, J. Lookadoo said in 2020 that "the debate has received renewed energy since the late 1990s and shows few signs of slowing." The original texts of six of the seven original letters are found in the Codex Mediceo Laurentianus, written in Greek in the 11th century (which also contains the pseudepigraphical letters of the Long Recension, except that to the Philippians), while the letter to the Romans is found in the Codex Colbertinus. Ignatius's letters bear signs of being written in great haste, such as run-on sentences and an unsystematic succession of thought. Ignatius modelled them after the biblical epistles of Paul, Peter, and John, quoting or paraphrasing these apostles' works freely. For example, in his letter to the Ephesians he quoted 1 Corinthians 1:18: Let my spirit be counted as nothing for the sake of the cross, which is a stumbling-block to those that do not believe, but to us salvation and life eternal. Ignatius is known to have taught the deity of Christ: There is one Physician who is possessed both of flesh and spirit; both made and not made; God existing in flesh; true life in death; both of Mary and of God; first passible and then impassible, even Jesus Christ our Lord. 
The same section in the text of the Long Recension says the following: But our Physician is the Only true God, the unbegotten and unapproachable, the Lord of all, the Father and Begetter of the only-begotten Son. We have also as a Physician the Lord our God, Jesus the Christ, the only-begotten Son and Word, before time began, but who afterwards became also man, of Mary the virgin. For "the Word was made flesh." Being incorporeal, He was in the body, being impassible, He was in a passible body, being immortal, He was in a mortal body, being life, He became subject to corruption, that He might free our souls from death and corruption, and heal them, and might restore them to health, when they were diseased with ungodliness and wicked lusts. He stressed the value of the Eucharist, calling it a "medicine of immortality" (Ignatius to the Ephesians 20:2). The very strong desire for bloody martyrdom in the arena, which Ignatius expresses rather graphically in places, may seem quite odd to the modern reader. An examination of his soteriology shows that he regarded salvation as freedom from the powerful fear of death, which enabled one to face martyrdom bravely. Ignatius is claimed to be the first known Christian writer to argue in favor of Christianity's replacement of the Sabbath with the Lord's Day: Be not seduced by strange doctrines nor by antiquated fables, which are profitless. For if even unto this day we live after the manner of Judaism, we avow that we have not received grace. ...If then those who had walked in ancient practices attained unto newness of hope, no longer observing Sabbaths but fashioning their lives after the Lord's day, on which our life also arose through Him ... how shall we be able to live apart from Him? 
If, therefore, those who were brought up in the ancient order of things have come to the possession of a new hope, no longer observing the Sabbath, but living in the observance of the Lord's day, on which also our life has sprung up again by Him and by His death—whom some deny, by which mystery we have obtained faith, and therefore endure, that we may be found the disciples of Jesus Christ, our only Master—how shall we be able to live apart from Him, whose disciples the prophets themselves in the Spirit did wait for Him as their Teacher? And therefore He whom they rightly waited for, being come, raised them from the dead. This passage has provoked textual debate since the only extant Greek manuscript reads Κατα κυριακήν ζωήν ζωντες, which could be translated "living according to the Lord's life." Most scholars, however, have followed the Latin text (secundum dominicam), omitting ζωήν and translating "living according to the Lord's Day". Ignatius is the earliest known Christian writer to emphasize loyalty to a single bishop in each city (or diocese) who is assisted by both presbyters (elders) and deacons. Earlier writings only mention either bishops or presbyters. For instance, he writes of bishops, presbyters, and deacons: Take care to do all things in harmony with God, with the bishop presiding in the place of God, and with the presbyters in the place of the council of the apostles, and with the deacons, who are most dear to me, entrusted with the business of Jesus Christ, who was with the Father from the beginning and is at last made manifest. He is also responsible for the first known use of the Greek word katholikos (καθολικός), or catholic, meaning "universal", "complete", "general", and/or "whole" to describe the Church, writing: Wherever the bishop appears, there let the people be; as wherever Jesus Christ is, there is the Catholic Church. It is not lawful to baptize or give communion without the consent of the bishop. 
On the other hand, whatever has his approval is pleasing to God. Thus, whatever is done will be safe and valid. Anglican bishop and theologian Joseph Lightfoot states that the word "catholic (καθόλου)" simply means "universal" (cf. "Roman Catholic" in the anachronistic modern sense of the particular religion), having a wide range of applications in the English language (thus requiring context to properly translate this word each time it is used, rather than to merely leave it transliterated), and can be found not only before and after Ignatius amongst ecclesiastical and classical writers, but centuries before the Christian era. Ignatius of Antioch is also credited with the earliest recorded use of the term "Christianity" (Greek: Χριστιανισμός) c. 100 AD. Several scholars have noted that there are striking similarities between Ignatius and the Christian-turned-Cynic philosopher Peregrinus Proteus, who is satirized by Lucian in The Passing of Peregrinus. It is generally believed that these parallels are the result of Lucian intentionally copying traits from Ignatius and applying them to his satire of Peregrinus. If the dependence of Lucian on the Ignatian epistles is accepted, then this places an upper limit on the date of the epistles in the 160s AD, just before The Passing of Peregrinus was written. In 1892, Daniel Völter sought to explain the parallels by proposing that the Ignatian epistles were in fact written by Peregrinus, and later attributed to the saint, but this speculative theory has failed to make a significant impact on the academic community. Epistles attributed to Saint Ignatius, but of spurious origin (their author is often called Pseudo-Ignatius in English) include:
[ { "paragraph_id": 0, "text": "Ignatius of Antioch (/ɪɡˈneɪʃəs/; Greek: Ἰγνάτιος Ἀντιοχείας, translit. Ignátios Antiokheías; died c. 108/140 AD), also known as Ignatius Theophorus (Ἰγνάτιος ὁ Θεοφόρος, Ignátios ho Theophóros, 'the God-bearing'), was an early Christian writer and Patriarch of Antioch. While en route to Rome, where he met his martyrdom, Ignatius wrote a series of letters. This correspondence forms a central part of a later collection of works by the Apostolic Fathers. He is considered one of the three most important of these, together with Clement of Rome and Polycarp. His letters also serve as an example of early Christian theology, and address important topics including ecclesiology, the sacraments, and the role of bishops.", "title": "" }, { "paragraph_id": 1, "text": "Nothing is known of Ignatius' life apart from the words of his letters, except from dubious later traditions. It is said Ignatius converted to Christianity at a young age. Tradition identifies him and his friend Polycarp as disciples of John the Apostle. Later, Ignatius was chosen to serve as Bishop of Antioch; the fourth-century Church historian Eusebius writes that Ignatius succeeded Evodius. Theodoret of Cyrrhus claimed that St. Peter himself left directions that Ignatius be appointed to this episcopal see. Ignatius called himself Theophorus (God Bearer). A tradition arose that he was one of the children whom Jesus Christ took in his arms and blessed.", "title": "Life" }, { "paragraph_id": 2, "text": "Ignatius' feast day was kept in his own Antioch on 17 October, the day on which he is now celebrated in the Catholic Church and generally in western Christianity, although from the 12th century until 1969 it was put at 1 February in the General Roman Calendar.", "title": "Veneration" }, { "paragraph_id": 3, "text": "In the Eastern Orthodox Church it is observed on 20 December. 
The Synaxarium of the Coptic Orthodox Church of Alexandria places it on the 24th of the Coptic Month of Koiak (which is also the 24th day of the fourth month of Tahisas in the Synaxarium of The Ethiopian Orthodox Tewahedo Church), corresponding in three years out of every four to 20 December in the Julian Calendar, which currently falls on 2 January of the Gregorian Calendar.", "title": "Veneration" }, { "paragraph_id": 4, "text": "Ignatius is honored in the Church of England and in the Episcopal Church on 17 October.", "title": "Veneration" }, { "paragraph_id": 5, "text": "Ignatius was condemned to death for his faith, but instead of being executed in his home town of Antioch, the bishop was taken to Rome by a company of ten soldiers:", "title": "Martyrdom" }, { "paragraph_id": 6, "text": "From Syria even unto Rome I fight with beasts, both by land and sea, both by night and day, being bound to ten leopards, I mean a band of soldiers...", "title": "Martyrdom" }, { "paragraph_id": 7, "text": "Scholars consider Ignatius' transport to Rome unusual, since those persecuted as Christians would be expected to be punished locally. Stevan Davies has pointed out that \"no other examples exist from the Flavian age of any prisoners except citizens or prisoners of war being brought to Rome for execution.\"", "title": "Martyrdom" }, { "paragraph_id": 8, "text": "If Ignatius had been a Roman citizen, he could have appealed to the emperor, with the common result of execution by beheading rather than torture. However, Ignatius's letters state that he was put in chains during the journey, and it was against Roman law for a citizen to be put in bonds during an appeal to the emperor.", "title": "Martyrdom" }, { "paragraph_id": 9, "text": "Allen Brent argues that Ignatius was transferred to Rome for the emperor to provide a spectacle as a victim in the Colosseum. 
Brent insists, contrary to some, that \"it was normal practice to transport condemned criminals from the provinces in order to offer spectator sport in the Colosseum at Rome.\"", "title": "Martyrdom" }, { "paragraph_id": 10, "text": "Stevan Davies rejects this idea, reasoning that: \"if Ignatius was in some way a donation by the Imperial Governor of Syria to the games at Rome, a single prisoner seems a rather miserly gift.\" Instead, Davies proposes that Ignatius may have been indicted by a legate, or representative, of the governor of Syria while the governor was away temporarily, and sent to Rome for trial and execution. Under Roman law, only the governor of a province or the emperor himself could impose capital punishment, so the legate would have faced the choice of imprisoning Ignatius in Antioch or sending him to Rome. Transporting the bishop might have avoided further agitation by the Antiochene Christians.", "title": "Martyrdom" }, { "paragraph_id": 11, "text": "Christine Trevett calls Davies' suggestion \"entirely hypothetical\" and concludes that no fully satisfactory solution to the problem can be found: \"I tend to take the bishop at his word when he says he is a condemned man. But the question remains, why is he going to Rome? The truth is that we do not know.\"", "title": "Martyrdom" }, { "paragraph_id": 12, "text": "During the journey to Rome, Ignatius and his entourage of soldiers made a number of lengthy stops in Asia Minor, deviating from the most direct land route from Antioch to Rome. Scholars generally agree on the following reconstruction of Ignatius' route of travel:", "title": "Martyrdom" }, { "paragraph_id": 13, "text": "During the journey, the soldiers seem to have allowed the chained Ignatius to meet with entire congregations of Christians, at least at Philadelphia (cf. Ign. Phil. 7), and numerous Christian visitors and messengers were allowed to meet with him individually. 
These messengers allowed Ignatius to send six letters to nearby churches, and one to Polycarp, the bishop of Smyrna.", "title": "Martyrdom" }, { "paragraph_id": 14, "text": "These aspects of Ignatius' martyrdom are also unusual, in that a prisoner would normally be transported on the most direct route to his destination, particularly since travel by land in the Roman Empire was far more expensive than by sea, and Antioch was a major sea port. Davies argues that Ignatius' circuitous route can only be explained by positing that he was not the main purpose of the soldiers' trip, and that the various stops in Asia Minor were for other state business. He suggests that such a scenario would also explain the relative freedom that Ignatius was given to meet with other Christians during the journey.", "title": "Martyrdom" }, { "paragraph_id": 15, "text": "Due to the sparse documentation, the date of Ignatius's death is uncertain. Tradition places his martyrdom in the reign of Trajan (emperor from 98–117 AD). The earliest source for this is the 4th century church historian Eusebius of Caesarea, who is regarded by some modern scholars as unreliable for chronological information on the early church. Eusebius may have had an ideological interest in dating church leaders as early as possible, and asserting a continuous succession between the original apostles of Jesus and the leaders of the church in his day.", "title": "Martyrdom" }, { "paragraph_id": 16, "text": "While many scholars accept this traditional dating, others have argued for a somewhat later date. Richard Pervo dated Ignatius' death to 135–140 AD. British classicist Timothy Barnes has argued for a date in the 140s AD, on the grounds that Ignatius seems to have quoted a work of the Gnostic Ptolemy, who only became active in the 130s. 
Étienne Decrept has argued from the testimony of John Malalas and the Acts of Drosis that Ignatius was martyred under the reign of Trajan during Apollo's festival in July 116 AD, and in response to the earthquake at Antioch in late 115 AD.", "title": "Martyrdom" }, { "paragraph_id": 17, "text": "Ignatius wrote that he would be thrown to the beasts, and in the fourth century Eusebius reports a tradition confirming this, while the account of Jerome is the first to explicitly mention \"lions.\" John Chrysostom is the first to place Ignatius' martyrdom at the Colosseum. Modern scholars are uncertain whether any of these authors had sources other than Ignatius' own writings.", "title": "Martyrdom" }, { "paragraph_id": 18, "text": "According to a medieval Christian text titled Martyrium Ignatii, Ignatius' remains were carried back to Antioch by his companions after his martyrdom. The sixth-century writings of Evagrius Scholasticus state that the reputed remains of Ignatius were moved by the Emperor Theodosius II to the Tychaeum, or Temple of Tyche, which was converted into a church dedicated to Ignatius. In 637, when Antioch was captured by the Rashidun Caliphate, the relics were transferred to the Basilica di San Clemente in Rome.", "title": "Martyrdom" }, { "paragraph_id": 19, "text": "The Martyrium Ignatii is an account of the saint's martyrdom. It is presented as an eye-witness account for the church of Antioch, attributed to Ignatius' companions, Philo of Cilicia, deacon at Tarsus, and Rheus Agathopus, a Syrian.", "title": "Martyrdom" }, { "paragraph_id": 20, "text": "Its most reliable manuscript is the 10th-century collection Codex Colbertinus (Paris), in which it is the final item. The Martyrium presents the confrontation of Bishop Ignatius with Emperor Trajan at Antioch, a familiar trope of Acta of the martyrs, and many details of the long journey to Rome. 
The Synaxarium of the Coptic Orthodox Church of Alexandria says that he was thrown to the wild beasts that devoured him.", "title": "Martyrdom" }, { "paragraph_id": 21, "text": "The following seven epistles preserved under the name of Ignatius are generally considered authentic, since they were mentioned by the historian Eusebius in the first half of the fourth century.", "title": "Epistles" }, { "paragraph_id": 22, "text": "Seven original epistles:", "title": "Epistles" }, { "paragraph_id": 23, "text": "The text of these epistles is known in three different recensions, or editions: the Short Recension, found in a Syriac manuscript; the Middle Recension, found only in Greek manuscripts; and the Long Recension, found in Greek and Latin manuscripts.", "title": "Epistles" }, { "paragraph_id": 24, "text": "For some time, it was believed that the Long Recension was the only extant version of the Ignatian epistles, but around 1628 a Latin translation of the Middle Recension was discovered by Archbishop James Ussher, who published it in 1646. For around a quarter of a century after this, it was debated which recension represented the original text of the epistles. But ever since John Pearson's strong defense of the authenticity of the Middle Recension in the late 17th century, there has been a scholarly consensus that the Middle Recension is the original version of the text. The Long Recension is the product of a fourth-century Arian Christian, who interpolated the Middle Recension epistles in order posthumously to enlist Ignatius as an unwitting witness in theological disputes of that age. This individual also forged the six spurious epistles attributed to Ignatius (see § Pseudo-Ignatius below).", "title": "Epistles" }, { "paragraph_id": 25, "text": "Manuscripts representing the Short Recension of the Ignatian epistles were discovered and published by William Cureton in the mid-19th century. 
For a brief period, there was a scholarly debate on the question of whether the Short Recension was earlier and more original than the Middle Recension. But by the end of the 19th century, Theodor Zahn and J. B. Lightfoot had established a scholarly consensus that the Short Recension is merely a summary of the text of the Middle Recension, and was therefore composed later.", "title": "Epistles" }, { "paragraph_id": 26, "text": "Though the Catholic Church has always supported the authenticity of the letters, some Protestants have tended to deny the authenticity of all the epistles because they seem to attest to a monarchical episcopate in the second century. John Calvin called the epistles \"rubbish published under Ignatius' name.\"", "title": "Epistles" }, { "paragraph_id": 27, "text": "In 1886, Presbyterian minister and church historian William Dool Killen published a long essay attacking the authenticity of the epistles attributed to Ignatius. He argued that Callixtus, bishop of Rome, forged the letters around AD 220 to garner support for a monarchical episcopate, modeling the renowned Saint Ignatius after his own life to give precedent for his own authority. Killen contrasted this episcopal polity with the presbyterian polity in the writings of Polycarp.", "title": "Epistles" }, { "paragraph_id": 28, "text": "Some doubts about the letters' authenticity continued into the 20th century. In the 1970s and 1980s, the scholars Robert Joly, Reinhard Hübner, Markus Vinzent, and Thomas Lechner argued forcefully that the epistles of the Middle Recension were forgeries from the reign of Marcus Aurelius (161–180 AD). Josep Rius-Camps published a study arguing that the Middle Recension letters were pseudepigraphically composed based on an original, smaller, authentic corpus of four letters (Romans, Magnesians, Trallians, and Ephesians). 
In 2009, Otto Zwierlein supported the thesis of a forgery written around 170 AD.", "title": "Epistles" }, { "paragraph_id": 29, "text": "These publications stirred up heated scholarly controversy, but by 2017, most patristic scholars accepted the authenticity of the seven original epistles. However, J. Lookadoo said in 2020 that \"the debate has received renewed energy since the late 1990s and shows few signs of slowing.\"", "title": "Epistles" }, { "paragraph_id": 30, "text": "The original texts of six of the seven original letters are found in the Codex Mediceo Laurentianus, written in Greek in the 11th century (which also contains the pseudepigraphical letters of the Long Recension, except that to the Philippians), while the letter to the Romans is found in the Codex Colbertinus.", "title": "Epistles" }, { "paragraph_id": 31, "text": "Ignatius's letters bear signs of being written in great haste, such as run-on sentences and an unsystematic succession of thought. Ignatius modelled them after the biblical epistles of Paul, Peter, and John, quoting or paraphrasing these apostles' works freely. 
For example, in his letter to the Ephesians he quoted 1 Corinthians 1:18:", "title": "Epistles" }, { "paragraph_id": 32, "text": "Let my spirit be counted as nothing for the sake of the cross, which is a stumbling-block to those that do not believe, but to us salvation and life eternal.", "title": "Epistles" }, { "paragraph_id": 33, "text": "Ignatius is known to have taught the deity of Christ:", "title": "Theology" }, { "paragraph_id": 34, "text": "There is one Physician who is possessed both of flesh and spirit; both made and not made; God existing in flesh; true life in death; both of Mary and of God; first passible and then impassible, even Jesus Christ our Lord.", "title": "Theology" }, { "paragraph_id": 35, "text": "The same section in the text of the Long Recension says the following:", "title": "Theology" }, { "paragraph_id": 36, "text": "But our Physician is the Only true God, the unbegotten and unapproachable, the Lord of all, the Father and Begetter of the only-begotten Son. We have also as a Physician the Lord our God, Jesus the Christ, the only-begotten Son and Word, before time began, but who afterwards became also man, of Mary the virgin. For \"the Word was made flesh.\" Being incorporeal, He was in the body, being impassible, He was in a passible body, being immortal, He was in a mortal body, being life, He became subject to corruption, that He might free our souls from death and corruption, and heal them, and might restore them to health, when they were diseased with ungodliness and wicked lusts.", "title": "Theology" }, { "paragraph_id": 37, "text": "He stressed the value of the Eucharist, calling it a \"medicine of immortality\" (Ignatius to the Ephesians 20:2). The very strong desire for bloody martyrdom in the arena, which Ignatius expresses rather graphically in places, may seem quite odd to the modern reader. 
An examination of his soteriology shows that he regarded salvation as liberation from the powerful fear of death, which enabled one to face martyrdom bravely.", "title": "Theology" }, { "paragraph_id": 38, "text": "Ignatius is claimed to be the first known Christian writer to argue in favor of Christianity's replacement of the Sabbath with the Lord's Day:", "title": "Theology" }, { "paragraph_id": 39, "text": "Be not seduced by strange doctrines nor by antiquated fables, which are profitless. For if even unto this day we live after the manner of Judaism, we avow that we have not received grace. ...If then those who had walked in ancient practices attained unto newness of hope, no longer observing Sabbaths but fashioning their lives after the Lord's day, on which our life also arose through Him ... how shall we be able to live apart from Him?", "title": "Theology" }, { "paragraph_id": 40, "text": "If, therefore, those who were brought up in the ancient order of things have come to the possession of a new hope, no longer observing the Sabbath, but living in the observance of the Lord's day, on which also our life has sprung up again by Him and by His death—whom some deny, by which mystery we have obtained faith, and therefore endure, that we may be found the disciples of Jesus Christ, our only Master—how shall we be able to live apart from Him, whose disciples the prophets themselves in the Spirit did wait for Him as their Teacher? 
And therefore He whom they rightly waited for, being come, raised them from the dead.", "title": "Theology" }, { "paragraph_id": 41, "text": "This passage has provoked textual debate since the only Greek manuscript extant reads Κατα κυριακήν ζωήν ζωντες, which could be translated \"living according to the Lord's life.\" Most scholars, however, have followed the Latin text (secundum dominicam), omitting ζωήν and translating \"living according to the Lord's Day\".", "title": "Theology" }, { "paragraph_id": 42, "text": "Ignatius is the earliest known Christian writer to emphasize loyalty to a single bishop in each city (or diocese) who is assisted by both presbyters (elders) and deacons. Earlier writings only mention either bishops or presbyters.", "title": "Theology" }, { "paragraph_id": 43, "text": "For instance, his writings on bishops, presbyters and deacons:", "title": "Theology" }, { "paragraph_id": 44, "text": "Take care to do all things in harmony with God, with the bishop presiding in the place of God, and with the presbyters in the place of the council of the apostles, and with the deacons, who are most dear to me, entrusted with the business of Jesus Christ, who was with the Father from the beginning and is at last made manifest.", "title": "Theology" }, { "paragraph_id": 45, "text": "He is also responsible for the first known use of the Greek word katholikos (καθολικός), or catholic, meaning \"universal\", \"complete\", \"general\", and/or \"whole\" to describe the Church, writing:", "title": "Theology" }, { "paragraph_id": 46, "text": "Wherever the bishop appears, there let the people be; as wherever Jesus Christ is, there is the Catholic Church. It is not lawful to baptize or give communion without the consent of the bishop. On the other hand, whatever has his approval is pleasing to God. 
Thus, whatever is done will be safe and valid.", "title": "Theology" }, { "paragraph_id": 47, "text": "Anglican bishop and theologian Joseph Lightfoot states that the word \"catholic (καθόλου)\" simply means \"universal\" (in contrast with \"Roman Catholic\" in the anachronistic modern sense of a particular denomination). The word has a wide range of applications in the English language, so context is needed to translate it properly each time it is used rather than merely leaving it transliterated, and it can be found not only before and after Ignatius amongst ecclesiastical and classical writers, but centuries before the Christian era. Ignatius of Antioch is also credited with the earliest recorded use of the term \"Christianity\" (Greek: Χριστιανισμός) c. 100 AD.", "title": "Theology" }, { "paragraph_id": 48, "text": "Several scholars have noted that there are striking similarities between Ignatius and the Christian-turned-Cynic philosopher Peregrinus Proteus, who is satirized by Lucian in The Passing of Peregrinus:", "title": "Parallels with Peregrinus Proteus" }, { "paragraph_id": 49, "text": "It is generally believed that these parallels are the result of Lucian intentionally copying traits from Ignatius and applying them to his satire of Peregrinus. 
If the dependence of Lucian on the Ignatian epistles is accepted, then this places the latest possible date of the epistles in the 160s AD, just before The Passing of Peregrinus was written.", "title": "Parallels with Peregrinus Proteus" }, { "paragraph_id": 50, "text": "In 1892, Daniel Völter sought to explain the parallels by proposing that the Ignatian epistles were in fact written by Peregrinus, and later attributed to the saint, but this speculative theory has failed to make a significant impact on the academic community.", "title": "Parallels with Peregrinus Proteus" }, { "paragraph_id": 51, "text": "Epistles attributed to Saint Ignatius, but of spurious origin (their author is often called Pseudo-Ignatius in English) include:", "title": "Pseudo-Ignatius" } ]
Ignatius of Antioch, also known as Ignatius Theophorus, was an early Christian writer and Patriarch of Antioch. While en route to Rome, where he met his martyrdom, Ignatius wrote a series of letters. This correspondence forms a central part of a later collection of works by the Apostolic Fathers. He is considered one of the three most important of these, together with Clement of Rome and Polycarp. His letters also serve as an example of early Christian theology, and address important topics including ecclesiology, the sacraments, and the role of bishops.
2002-02-25T15:51:15Z
2023-12-22T15:05:55Z
[ "Template:Wikisource author", "Template:Commons category", "Template:History of Catholic theology", "Template:Em", "Template:Refend", "Template:Cite news", "Template:Refbegin", "Template:Librivox author", "Template:S-rel", "Template:Authority control", "Template:Ignatius of Antioch", "Template:Citation", "Template:Circa", "Template:Ws", "Template:S-bef", "Template:S-ttl", "Template:S-end", "Template:Christian History", "Template:Lang-grc-gre", "Template:Better source needed", "Template:Internet Archive author", "Template:S-aft", "Template:Lang-grc", "Template:Refn", "Template:Blockquote", "Template:Section link", "Template:Lang-el", "Template:ISBN", "Template:Cite book", "Template:Cite journal", "Template:Short description", "Template:IPAc-en", "Template:Patriarchs of Antioch", "Template:Portal", "Template:Reflist", "Template:S-start", "Template:Infobox saint", "Template:Rp", "Template:Catholic saints", "Template:Cite web", "Template:Wikiquote" ]
https://en.wikipedia.org/wiki/Ignatius_of_Antioch
15,437
ITU prefix
The International Telecommunication Union (ITU) allocates call sign prefixes for radio and television stations of all types. They also form the basis for, but may not exactly match, aircraft registration identifiers. These prefixes are agreed upon internationally, and are a form of country code. A call sign can be any number of letters and numerals but each country must only use call signs that begin with the characters allocated for use in that country. With regard to the second and/or third letters in the prefixes in the list below, if the country in question is allocated all callsigns with A to Z in that position, then that country can also use call signs with the digits 0 to 9 in that position. For example, the United States is assigned KA–KZ, and therefore can also use prefixes like K1 or K9. While ITU prefix rules are adhered to in the context of international broadcasting, including amateur radio, it is rarer for countries to assign broadcast call signs to conventional AM, FM, and television stations with purely domestic reach; the United States, Canada, Mexico, Japan, South Korea, and the Philippines are among those that do. Canada presents one notable exception to the ITU prefix rules: Since 1936, it has used CB for its own Canadian Broadcasting Corporation stations, whereas Chile is officially assigned the CB prefix. Innovation, Science and Economic Development Canada's broadcasting rules indicate this is through a "special arrangement", without elaborating. In any case, the two countries are geographically separate enough to prevent confusion; Canada's shortwave broadcasters and amateur radio stations have always used one of its assigned ITU prefixes. Unallocated: The following call sign prefixes are available for future allocation by the ITU. (x represents any letter; n represents any digit from 2–9.) (* Indicates a prefix that has recently been returned to the ITU.) 
Unavailable: Under present ITU guidelines the following call sign prefixes shall not be allocated. They are sometimes used unofficially – such as amateur radio operators operating in a disputed territory or in a nation state that has no official prefix (e.g. S0 in Western Sahara, station 1A0 at Knights of Malta headquarters in Rome, or station 1L in Liberland). (x represents any letter; n represents any digit from 2–9.) Linked country codes are from ISO 3166-1. Series allocated to an international organization. Provisional allocation in accordance with No. S19.33: "Between radiocommunication conferences, the Secretary-General is authorized to deal with questions relating to changes in the allocation of series of call signs, on a provisional basis, and subject to confirmation by the following conference." Half series allocation. The first country listed uses all callsigns beginning with the listed prefix followed by A-M, and the second country listed uses N-Z.
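The second/third-letter rule described above can be sketched as a small helper function. This is a hypothetical illustration of the rule only, not ITU software; the function name and interface are invented for this example:

```python
ASCII_UPPER = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ")


def valid_prefix_chars(allocated_letters):
    """Characters a country may use at a given prefix position.

    Per the rule described above: if a country is allocated the full
    A-Z range at a position, the digits 0-9 are also usable there.
    """
    chars = set(allocated_letters.upper())
    if chars >= ASCII_UPPER:
        chars |= set("0123456789")
    return chars


# The United States holds KA-KZ (all of A-Z in the second position),
# so prefixes such as K1 and K9 are also valid.
us_second_position = valid_prefix_chars("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
print("1" in us_second_position, "9" in us_second_position)

# A country holding only a partial letter range gains no digits,
# as in a half-series allocation (A-M to one country, N-Z to another).
half_series = valid_prefix_chars("ABCDEFGHIJKLM")
print("1" in half_series)
```

A fuller model would also track the first prefix character per country and the unallocated/unavailable series listed above, but the digit-expansion rule is the only non-obvious part.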
[ { "paragraph_id": 0, "text": "The International Telecommunication Union (ITU) allocates call sign prefixes for radio and television stations of all types. They also form the basis for, but may not exactly match, aircraft registration identifiers. These prefixes are agreed upon internationally, and are a form of country code. A call sign can be any number of letters and numerals but each country must only use call signs that begin with the characters allocated for use in that country.", "title": "" }, { "paragraph_id": 1, "text": "With regard to the second and/or third letters in the prefixes in the list below, if the country in question is allocated all callsigns with A to Z in that position, then that country can also use call signs with the digits 0 to 9 in that position. For example, the United States is assigned KA–KZ, and therefore can also use prefixes like K1 or K9.", "title": "" }, { "paragraph_id": 2, "text": "While ITU prefix rules are adhered to in the context of international broadcasting, including amateur radio, it is rarer for countries to assign broadcast call signs to conventional AM, FM, and television stations with purely domestic reach; the United States, Canada, Mexico, Japan, South Korea, and the Philippines are among those that do. Canada presents one notable exception to the ITU prefix rules: Since 1936, it has used CB for its own Canadian Broadcasting Corporation stations, whereas Chile is officially assigned the CB prefix. Innovation, Science and Economic Development Canada's broadcasting rules indicate this is through a \"special arrangement\", without elaborating. In any case, the two countries are geographically separate enough to prevent confusion; Canada's shortwave broadcasters and amateur radio stations have always used one of its assigned ITU prefixes.", "title": "" }, { "paragraph_id": 3, "text": "Unallocated: The following call sign prefixes are available for future allocation by the ITU. 
(x represents any letter; n represents any digit from 2–9.)", "title": "Unallocated and unavailable call sign prefixes" }, { "paragraph_id": 4, "text": "(* Indicates a prefix that has recently been returned to the ITU.)", "title": "Unallocated and unavailable call sign prefixes" }, { "paragraph_id": 5, "text": "Unavailable: Under present ITU guidelines the following call sign prefixes shall not be allocated. They are sometimes used unofficially – such as amateur radio operators operating in a disputed territory or in a nation state that has no official prefix (e.g. S0 in Western Sahara, station 1A0 at Knights of Malta headquarters in Rome, or station 1L in Liberland). (x represents any letter; n represents any digit from 2–9.)", "title": "Unallocated and unavailable call sign prefixes" }, { "paragraph_id": 6, "text": "Linked country codes are from ISO 3166-1.", "title": "Allocation table" }, { "paragraph_id": 7, "text": "Series allocated to an international organization. Provisional allocation in accordance with No. S19.33: \"Between radiocommunication conferences, the Secretary-General is authorized to deal with questions relating to changes in the allocation of series of call signs, on a provisional basis, and subject to confirmation by the following conference.\" Half series allocation. The first country listed uses all callsigns beginning with the listed prefix followed by A-M, and the second country listed uses N-Z.", "title": "Allocation table" } ]
The International Telecommunication Union (ITU) allocates call sign prefixes for radio and television stations of all types. They also form the basis for, but may not exactly match, aircraft registration identifiers. These prefixes are agreed upon internationally, and are a form of country code. A call sign can be any number of letters and numerals but each country must only use call signs that begin with the characters allocated for use in that country. With regard to the second and/or third letters in the prefixes in the list below, if the country in question is allocated all callsigns with A to Z in that position, then that country can also use call signs with the digits 0 to 9 in that position. For example, the United States is assigned KA–KZ, and therefore can also use prefixes like K1 or K9. While ITU prefix rules are adhered to in the context of international broadcasting, including amateur radio, it is rarer for countries to assign broadcast call signs to conventional AM, FM, and television stations with purely domestic reach; the United States, Canada, Mexico, Japan, South Korea, and the Philippines are among those that do. Canada presents one notable exception to the ITU prefix rules: Since 1936, it has used CB for its own Canadian Broadcasting Corporation stations, whereas Chile is officially assigned the CB prefix. Innovation, Science and Economic Development Canada's broadcasting rules indicate this is through a "special arrangement", without elaborating. In any case, the two countries are geographically separate enough to prevent confusion; Canada's shortwave broadcasters and amateur radio stations have always used one of its assigned ITU prefixes.
2001-12-21T23:45:49Z
2023-12-25T20:42:32Z
[ "Template:Use dmy dates", "Template:Further", "Template:Gridded chart of ITU prefixes", "Template:ITU prefixes by nation", "Template:Cite web", "Template:Reflist", "Template:Telecommunications", "Template:Short description" ]
https://en.wikipedia.org/wiki/ITU_prefix
15,440
IBM PC keyboard
The keyboard for IBM PC-compatible computers is standardized. However, during the more than 30 years of PC architecture being frequently updated, many keyboard layout variations have been developed. A well-known class of IBM PC keyboards is the Model M. Introduced in 1984 and manufactured by IBM, Lexmark, Maxi-Switch and Unicomp, the vast majority of Model M keyboards feature a buckling spring key design and many have fully swappable keycaps. The PC keyboard changed over the years, often at the launch of new IBM PC versions. Common additions to the standard layouts include additional power management keys, volume controls, media player controls (e.g. "Play/Pause", "Previous track", "Next track") and miscellaneous user-configurable shortcuts for email client, World Wide Web browser, etc. The IBM PC layout, particularly the Model M, has been extremely influential, and today most keyboards use some variant of it. This has caused problems for applications developed with alternative layouts, which require keys that are in awkward positions on the Model M layout – often requiring the pinkie to operate – and thus require remapping for comfortable use. One notable example is the escape key, used by the vi editor: on the ADM-3A terminal this was located where the Tab key is on the IBM PC, but on the IBM PC the Escape key is in the corner; this is typically solved by remapping Caps Lock to Escape. Another example is the Emacs editor, which makes extensive use of modifier keys, and uses the Control key more than the meta key (IBM PC instead has the Alt key) – these date to the Knight keyboard, which had the Control key on the inside of the Meta key, opposite to the Model M, where it is on the outside of the Alt key; and to the space-cadet keyboard, where the four bucky bit keys (Control, Meta, Super, Hyper) are in a row, allowing easy chording to press several, unlike on the Model M layout. This results in the "Emacs pinky" problem. 
Although PC Magazine praised most aspects of the 1981 IBM PC keyboard's hardware design, it questioned "how IBM, that ultimate pro of keyboard manufacture, could put the left-hand shift key at the awkward reach they did". The magazine reported in 1982 that it received more letters to its "Wish List" column asking for the ability to determine the status of the three lock keys than on any other topic. Byte columnist Jerry Pournelle described the keyboard as "infuriatingly excellent". He praised its feel but complained that the Shift and other keys' locations were "enough to make a saint weep", and denounced the trend of PC compatible computers to emulate the layout but not the feel. He reported that the layout "nearly drove" science-fiction editor Jim Baen "crazy", and that "many of [Baen's] authors refused to work with that keyboard" so could not submit manuscripts in a compatible format. The magazine's official review was more sanguine. It praised the keyboard as "bar none, the best ... on any microcomputer" and described the unusual Shift key locations as "minor [problems] compared to some of the gigantic mistakes made on almost every other microcomputer keyboard". "I wasn't thrilled with the placement of [the left Shift and Return] keys, either", IBM's Don Estridge stated in 1983. He defended the layout, however, stating that "every place you pick to put them is not a good place for somebody ... there's no consensus", and claimed that "if we were to change it now we would be in hot water". The PC keyboard with its various keys has a long history of evolution reaching back to teletypewriters. In addition to the 'old' standard keys, the PC keyboard has accumulated several special keys over the years. 
Some of the additions have been inspired by the opportunity or requirement for improving user productivity with general office application software, while other slightly more general keyboard additions have become the factory standards after being introduced by certain operating system or GUI software vendors such as Microsoft.
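The Caps Lock-to-Escape remapping mentioned above amounts to a translation table applied before keystrokes reach the application. A toy sketch of the idea follows; the names here are illustrative only, not any operating system's real API (on X11, for example, this is typically configured at the keymap level with `setxkbmap -option caps:escape`):

```python
# Toy illustration of key remapping: intercept a key and report a
# different one to the application.
REMAP = {"CapsLock": "Escape"}


def translate(key):
    """Return the key the application should see after remapping."""
    return REMAP.get(key, key)


print(translate("CapsLock"))  # remapped key
print(translate("A"))         # unmapped keys pass through unchanged
```

Real remappers work on hardware scancodes or keycodes rather than names, but the lookup-with-fallthrough structure is the same.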
[ { "paragraph_id": 0, "text": "The keyboard for IBM PC-compatible computers is standardized. However, during the more than 30 years of PC architecture being frequently updated, many keyboard layout variations have been developed.", "title": "" }, { "paragraph_id": 1, "text": "A well-known class of IBM PC keyboards is the Model M. Introduced in 1984 and manufactured by IBM, Lexmark, Maxi-Switch and Unicomp, the vast majority of Model M keyboards feature a buckling spring key design and many have fully swappable keycaps.", "title": "" }, { "paragraph_id": 2, "text": "The PC keyboard changed over the years, often at the launch of new IBM PC versions.", "title": "Keyboard layouts" }, { "paragraph_id": 3, "text": "Common additions to the standard layouts include additional power management keys, volume controls, media player controls (e.g. \"Play/Pause\", \"Previous track\", \"Next track) and miscellaneous user-configurable shortcuts for email client, World Wide Web browser, etc.", "title": "Keyboard layouts" }, { "paragraph_id": 4, "text": "The IBM PC layout, particularly the Model M, has been extremely influential, and today most keyboards use some variant of it. This has caused problems for applications developed with alternative layouts, which require keys that are in awkward positions on the Model M layout – often requiring the pinkie to operate – and thus require remapping for comfortable use. One notable example is the escape key, used by the vi editor: on the ADM-3A terminal this was located where the Tab key is on the IBM PC, but on the IBM PC the Escape key is in the corner; this is typically solved by remapping Caps Lock to Escape. 
Another example is the Emacs editor, which makes extensive use of modifier keys, and uses the Control key more than the meta key (IBM PC instead has the Alt key) – these date to the Knight keyboard, which had the Control key on the inside of the Meta key, opposite to the Model M, where it is on the outside of the Alt key; and to the space-cadet keyboard, where the four bucky bit keys (Control, Meta, Super, Hyper) are in a row, allowing easy chording to press several, unlike on the Model M layout. This results in the \"Emacs pinky\" problem.", "title": "Keyboard layouts" }, { "paragraph_id": 5, "text": "Although PC Magazine praised most aspects of the 1981 IBM PC keyboard's hardware design, it questioned \"how IBM, that ultimate pro of keyboard manufacture, could put the left-hand shift key at the awkward reach they did\". The magazine reported in 1982 that it received more letters to its \"Wish List\" column asking for the ability to determine the status of the three lock keys than on any other topic. Byte columnist Jerry Pournelle described the keyboard as \"infuriatingly excellent\". He praised its feel but complained that the Shift and other keys' locations were \"enough to make a saint weep\", and denounced the trend of PC compatible computers to emulate the layout but not the feel. He reported that the layout \"nearly drove\" science-fiction editor Jim Baen \"crazy\", and that \"many of [Baen's] authors refused to work with that keyboard\" so could not submit manuscripts in a compatible format. The magazine's official review was more sanguine. It praised the keyboard as \"bar none, the best ... on any microcomputer\" and described the unusual Shift key locations as \"minor [problems] compared to some of the gigantic mistakes made on almost every other microcomputer keyboard\".", "title": "Reception" }, { "paragraph_id": 6, "text": "\"I wasn't thrilled with the placement of [the left Shift and Return] keys, either\", IBM's Don Estridge stated in 1983. 
He defended the layout, however, stating that \"every place you pick to put them is not a good place for somebody ... there's no consensus\", and claimed that \"if we were to change it now we would be in hot water\".", "title": "Reception" }, { "paragraph_id": 7, "text": "The PC keyboard with its various keys has a long history of evolution reaching back to teletypewriters. In addition to the 'old' standard keys, the PC keyboard has accumulated several special keys over the years. Some of the additions have been inspired by the opportunity or requirement for improving user productivity with general office application software, while other slightly more general keyboard additions have become the factory standards after being introduced by certain operating system or GUI software vendors such as Microsoft.", "title": "Standard key meanings" } ]
The keyboard for IBM PC-compatible computers is standardized. However, during the more than 30 years of PC architecture being frequently updated, many keyboard layout variations have been developed. A well-known class of IBM PC keyboards is the Model M. Introduced in 1984 and manufactured by IBM, Lexmark, Maxi-Switch and Unicomp, the vast majority of Model M keyboards feature a buckling spring key design and many have fully swappable keycaps.
2001-12-22T18:26:07Z
2023-12-08T04:17:25Z
[ "Template:Key", "Template:Cite book", "Template:Cite web", "Template:Webarchive", "Template:Refimprove", "Template:0", "Template:See also", "Template:Citation needed", "Template:Cite news", "Template:Key press", "Template:Keyboard", "Template:Reflist", "Template:Cite magazine" ]
https://en.wikipedia.org/wiki/IBM_PC_keyboard
15,441
Italian battleship Giulio Cesare
Giulio Cesare was one of three Conte di Cavour-class dreadnought battleships built for the Royal Italian Navy (Regia Marina) in the 1910s. Completed in 1914, she was little used and saw no combat during the First World War. The ship supported operations during the Corfu Incident in 1923 and spent much of the rest of the decade in reserve. She was rebuilt between 1933 and 1937 with more powerful guns, additional armor and considerably more speed than before. During World War II, both Giulio Cesare and her sister ship, Conte di Cavour, participated in the Battle of Calabria in July 1940, when the former was lightly damaged. They were both present when British torpedo bombers attacked the fleet at Taranto in November 1940, but Giulio Cesare was not damaged. She escorted several convoys to North Africa and participated in the Battle of Cape Spartivento in late 1940 and the First Battle of Sirte in late 1941. She was designated as a training ship in early 1942, and escaped to Malta after the Italian armistice the following year. The ship was transferred to the Soviet Union in 1949 and renamed Novorossiysk (Новороссийск). The Soviets also used her for training until she was sunk in 1955, with the loss of 617 men, by an explosion most likely caused by an old German mine. She was salvaged the following year and later scrapped. The Conte di Cavour class was designed to counter the French Courbet-class dreadnoughts which caused them to be slower and more heavily armored than the first Italian dreadnought, Dante Alighieri. The ships were 168.9 meters (554 ft 2 in) long at the waterline and 176 meters (577 ft 5 in) overall. They had a beam of 28 meters (91 ft 10 in), and a draft of 9.3 meters (30 ft 6 in). The Conte di Cavour-class ships displaced 23,088 long tons (23,458 t) at normal load, and 25,086 long tons (25,489 t) at deep load. They had a crew of 31 officers and 969 enlisted men. 
The ships were powered by three sets of Parsons steam turbines, two sets driving the outer propeller shafts and one set the two inner shafts. Steam for the turbines was provided by 24 Babcock & Wilcox boilers, half of which burned fuel oil and the other half burned both oil and coal. Designed to reach a maximum speed of 22.5 knots (41.7 km/h; 25.9 mph) from 31,000 shaft horsepower (23,000 kW), Giulio Cesare failed to reach this goal on her sea trials, reaching only 21.56 knots (39.9 km/h; 24.8 mph) from 30,700 shp (22,900 kW). The ships carried enough coal and oil to give them a range of 4,800 nautical miles (8,900 km; 5,500 mi) at 10 knots (19 km/h; 12 mph). The main battery of the Conte di Cavour class consisted of thirteen 305-millimeter Model 1909 guns, in five centerline gun turrets, with a twin-gun turret superfiring over a triple-gun turret in fore and aft pairs, and a third triple turret amidships. Their secondary armament consisted of eighteen 120-millimeter (4.7 in) guns mounted in casemates on the sides of the hull. For defense against torpedo boats, the ships carried fourteen 76.2-millimeter (3 in) guns; thirteen of these could be mounted on the turret tops, but they could be positioned in 30 different locations, including some on the forecastle and upper decks. They were also fitted with three submerged 450-millimeter (17.7 in) torpedo tubes, one on each broadside and the third in the stern. The Conte di Cavour-class ships had a complete waterline armor belt that had a maximum thickness of 250 millimeters (9.8 in) amidships, which reduced to 130 millimeters (5.1 in) towards the stern and 80 millimeters (3.1 in) towards the bow. They had two armored decks: the main deck was 24 mm (0.94 in) thick on the flat that increased to 40 millimeters (1.6 in) on the slopes that connected it to the main belt. The second deck was 30 millimeters (1.2 in) thick. 
Frontal armor of the gun turrets was 280 millimeters (11 in) in thickness and the sides were 240 millimeters (9.4 in) thick. The armor protecting their barbettes ranged in thickness from 130 to 230 millimeters (5.1 to 9.1 in). The walls of the forward conning tower were 280 millimeters thick. Shortly after the end of World War I, the number of 76.2 mm guns was reduced to 13, all mounted on the turret tops, and six new 76.2-millimeter anti-aircraft (AA) guns were installed abreast the aft funnel. In addition two license-built 2-pounder (1.6 in (40 mm)) AA guns were mounted on the forecastle deck. In 1925–1926 the foremast was replaced by a four-legged (tetrapodal) mast, which was moved forward of the funnels, the rangefinders were upgraded, and the ship was equipped to handle a Macchi M.18 seaplane mounted on the amidships turret. Around that same time, either one or both of the ships was equipped with a fixed aircraft catapult on the port side of the forecastle. Giulio Cesare began an extensive reconstruction in October 1933 at the Cantieri del Tirreno shipyard in Genoa that lasted until October 1937. A new bow section was grafted over the existing bow which increased her length by 10.31 meters (33 ft 10 in) to 186.4 meters (611 ft 7 in) and her beam increased to 28.6 meters (93 ft 10 in). The ship's draft at deep load increased to 10.42 meters (34 ft 2 in). All of the changes made increased her displacement to 26,140 long tons (26,560 t) at standard load and 29,100 long tons (29,600 t) at deep load. The ship's crew increased to 1,260 officers and enlisted men. Two of the propeller shafts were removed and the existing turbines were replaced by two Belluzzo geared steam turbines rated at 75,000 shp (56,000 kW). The boilers were replaced by eight Yarrow boilers. On her sea trials in December 1936, before her reconstruction was fully completed, Giulio Cesare reached a speed of 28.24 knots (52.30 km/h; 32.50 mph) from 93,430 shp (69,670 kW). 
In service her maximum speed was about 27 knots (50 km/h; 31 mph) and she had a range of 6,400 nautical miles (11,900 km; 7,400 mi) at a speed of 13 knots (24 km/h; 15 mph). The main guns were bored out to 320 mm (12.6 in) and the center turret and the torpedo tubes were removed. All of the existing secondary armament and AA guns were replaced by a dozen 120 mm guns in six twin-gun turrets and eight 100 mm (4 in) AA guns in twin turrets. In addition the ship was fitted with a dozen Breda 37-millimeter (1.5 in) light AA guns in six twin-gun mounts and twelve 13.2-millimeter (0.52 in) Breda M31 anti-aircraft machine guns, also in twin mounts. In 1940 the 13.2 mm machine guns were replaced by 20 mm (0.79 in) AA guns in twin mounts. Giulio Cesare received two more twin mounts as well as four additional 37 mm guns in twin mounts on the forecastle between the two turrets in 1941. The tetrapodal mast was replaced with a new forward conning tower, protected with 260-millimeter (10.2 in) thick armor. Atop the conning tower there was a fire-control director fitted with two large stereo-rangefinders, with a base length of 7.2 meters (23.6 ft). The deck armor was increased during the reconstruction to a total of 135 millimeters (5.3 in) over the engine and boiler rooms and 166 millimeters (6.5 in) over the magazines, although its distribution over three decks meant that it was considerably less effective than a single plate of the same thickness. The armor protecting the barbettes was reinforced with 50-millimeter (2 in) plates. All this armor weighed a total of 3,227 long tons (3,279 t). The existing underwater protection was replaced by the Pugliese torpedo defense system that consisted of a large cylinder surrounded by fuel oil or water that was intended to absorb the blast of a torpedo warhead. It lacked, however, enough depth to be fully effective against contemporary torpedoes. 
A major problem of the reconstruction was that the ship's increased draft meant that their waterline armor belt was almost completely submerged with any significant load. Giulio Cesare, named after Julius Caesar, was laid down at the Gio. Ansaldo & C. shipyard in Genoa on 24 June 1910 and launched on 15 October 1911. She was completed on 14 May 1914 and served as a flagship in the southern Adriatic Sea during World War I. She saw no action, however, and spent little time at sea. Admiral Paolo Thaon di Revel, the Italian naval chief of staff, believed that Austro-Hungarian submarines and minelayers could operate too effectively in the narrow waters of the Adriatic. The threat from these underwater weapons to his capital ships was too serious for him to use the fleet in an active way. Instead, Revel decided to implement a blockade at the relatively safer southern end of the Adriatic with the battle fleet, while smaller vessels, such as the MAS torpedo boats, conducted raids on Austro-Hungarian ships and installations. Meanwhile, Revel's battleships would be preserved to confront the Austro-Hungarian battle fleet in the event that it sought a decisive engagement. Giulio Cesare made port visits in the Levant in 1919 and 1920. Both Giulio Cesare and Conte di Cavour supported Italian operations on Corfu in 1923 after an Italian general and his staff were murdered at the Greek–Albanian frontier; Benito Mussolini, who had been looking for a pretext to seize Corfu, ordered Italian troops to occupy the island. Cesare became a gunnery training ship in 1928, after having been in reserve since 1926. She was reconstructed at Cantieri del Tirreno, Genoa, between 1933 and 1937. Both ships participated in a naval review by Adolf Hitler in the Bay of Naples in May 1938 and covered the invasion of Albania in May 1939. 
Early in World War II, the ship took part in the Battle of Calabria (also known as the Battle of Punta Stilo), together with Conte di Cavour, on 9 July 1940, as part of the 1st Battle Squadron, commanded by Admiral Inigo Campioni, during which she engaged major elements of the British Mediterranean Fleet. The British were escorting a convoy from Malta to Alexandria, while the Italians had finished escorting another from Naples to Benghazi, Libya. Admiral Andrew Cunningham, commander of the Mediterranean Fleet, attempted to interpose his ships between the Italians and their base at Taranto. The fleets spotted each other in the middle of the afternoon and the battleships opened fire at 15:53 at a range of nearly 27,000 meters (29,000 yd). The two leading British battleships, HMS Warspite and Malaya, replied a minute later. Three minutes after she opened fire, shells from Giulio Cesare began to straddle Warspite which made a small turn and increased speed, to throw off the Italian ship's aim, at 16:00. Some rounds fired by Giulio Cesare overshot Warspite and near-missed the destroyers HMS Decoy and Hereward, puncturing their superstructures with splinters. At that same time, a shell from Warspite struck Giulio Cesare at a distance of about 24,000 meters (26,000 yd). The shell pierced the rear funnel and detonated inside it, blowing out a hole nearly 6.1 meters (20 ft) across. Fragments started several fires and their smoke was drawn into the boiler rooms, forcing four boilers off-line as their operators could not breathe. This reduced the ship's speed to 18 knots (33 km/h; 21 mph). Uncertain how severe the damage was, Campioni ordered his battleships to turn away in the face of superior British numbers and they successfully disengaged. Repairs to Giulio Cesare were completed by the end of August and both ships unsuccessfully attempted to intercept British convoys to Malta in August and September. 
On the night of 11 November 1940, Giulio Cesare and the other Italian battleships, along with several other warships, were at anchor in Taranto harbor when they were attacked by 21 Fairey Swordfish torpedo bombers from the British aircraft carrier HMS Illustrious. One torpedo sank Conte di Cavour in shallow water, but Giulio Cesare was not hit during the attack. She participated in the Battle of Cape Spartivento on 27 November 1940, but never got close enough to any British ships to fire at them. The ship was damaged in January 1941 by splinters from a near miss during an air raid on Naples by Vickers Wellington bombers of the Royal Air Force; repairs at Genoa were completed in early February. On 8 February, she sailed to the Straits of Bonifacio to intercept what the Italians thought was a Malta convoy, but was actually a raid on Genoa. She failed to make contact with any British forces. She participated in the First Battle of Sirte on 17 December 1941, providing distant cover for a convoy bound for Libya, and briefly engaging the escort force of a British convoy. She also provided distant cover for another convoy to North Africa in early January 1942. Giulio Cesare was reduced to a training ship afterwards at Taranto and later Pola. After the Italian surrender on 8 September 1943, she steamed to Taranto, putting down a mutiny and enduring an ineffective attack by five German aircraft en route. She then sailed for Malta where she arrived on 12 September to be interned. The ship remained there until 17 June 1944 when she returned to Taranto where she remained for the next four years. After the war, Giulio Cesare was allocated to the Soviet Union as part of war reparations. She was moved to Augusta, Sicily, on 9 December 1948, where an unsuccessful attempt was made at sabotage. The ship was stricken from the naval register on 15 December and turned over to the Soviets on 6 February 1949 under the temporary name of Z11 in Vlorë, Albania. 
She was renamed Novorossiysk, after the Soviet city of that name on the Black Sea. The Soviets used her as a training ship, and gave her eight refits. In 1953, all Italian light AA guns were replaced by eighteen 37 mm 70-K AA guns in six twin mounts and six singles. Also replaced were her fire-control systems and radars. This was intended as a temporary rearmament, as the Soviets drew up plans to replace her secondary 120 mm mounts with the 130 mm/58 SM-2 that was in development, and the 100 mm and 37 mm guns with eight quadruple 45 mm mounts. While at anchor in Sevastopol on the night of 28/29 October 1955, an explosion ripped a 4-by-14-meter (13 by 46 ft) hole in the forecastle forward of 'A' turret. The flooding could not be controlled, and she capsized with the loss of 617 men, including 61 men sent from other ships to assist. The cause of the explosion is still unclear. The official cause, regarded as the most probable, was a magnetic RMH or LMB bottom mine, laid by the Germans during World War II and triggered by the dragging of the battleship's anchor chain before mooring for the last time. Subsequent searches located 32 mines of these types, some of them within 50 meters (160 ft) of the explosion. The damage was consistent with an explosion of 1,000–1,200 kilograms (2,200–2,600 lb) of TNT, and more than one mine may have detonated. Other explanations for the ship's loss have been proposed, and the most popular of these is that she was sunk by Italian frogmen of the wartime special operations unit Decima Flottiglia MAS who – more than ten years after the cessation of hostilities – were either avenging the transfer of the former Italian battleship to the USSR or sinking it on behalf of NATO. Novorossiysk was stricken from the naval register on 24 February 1956, salvaged on 4 May 1957, and subsequently scrapped. 44°37′7″N 33°32′8″E / 44.61861°N 33.53556°E / 44.61861; 33.53556
[ { "paragraph_id": 0, "text": "Giulio Cesare was one of three Conte di Cavour-class dreadnought battleships built for the Royal Italian Navy (Regia Marina) in the 1910s. Completed in 1914, she was little used and saw no combat during the First World War. The ship supported operations during the Corfu Incident in 1923 and spent much of the rest of the decade in reserve. She was rebuilt between 1933 and 1937 with more powerful guns, additional armor and considerably more speed than before.", "title": "" }, { "paragraph_id": 1, "text": "During World War II, both Giulio Cesare and her sister ship, Conte di Cavour, participated in the Battle of Calabria in July 1940, when the former was lightly damaged. They were both present when British torpedo bombers attacked the fleet at Taranto in November 1940, but Giulio Cesare was not damaged. She escorted several convoys to North Africa and participated in the Battle of Cape Spartivento in late 1940 and the First Battle of Sirte in late 1941. She was designated as a training ship in early 1942, and escaped to Malta after the Italian armistice the following year. The ship was transferred to the Soviet Union in 1949 and renamed Novorossiysk (Новороссийск). The Soviets also used her for training until she was sunk in 1955, with the loss of 617 men, by an explosion most likely caused by an old German mine. She was salvaged the following year and later scrapped.", "title": "" }, { "paragraph_id": 2, "text": "The Conte di Cavour class was designed to counter the French Courbet-class dreadnoughts which caused them to be slower and more heavily armored than the first Italian dreadnought, Dante Alighieri. The ships were 168.9 meters (554 ft 2 in) long at the waterline and 176 meters (577 ft 5 in) overall. They had a beam of 28 meters (91 ft 10 in), and a draft of 9.3 meters (30 ft 6 in). The Conte di Cavour-class ships displaced 23,088 long tons (23,458 t) at normal load, and 25,086 long tons (25,489 t) at deep load. 
They had a crew of 31 officers and 969 enlisted men. The ships were powered by three sets of Parsons steam turbines, two sets driving the outer propeller shafts and one set the two inner shafts. Steam for the turbines was provided by 24 Babcock & Wilcox boilers, half of which burned fuel oil and the other half burned both oil and coal. Designed to reach a maximum speed of 22.5 knots (41.7 km/h; 25.9 mph) from 31,000 shaft horsepower (23,000 kW), Giulio Cesare failed to reach this goal on her sea trials, reaching only 21.56 knots (39.9 km/h; 24.8 mph) from 30,700 shp (22,900 kW). The ships carried enough coal and oil to give them a range of 4,800 nautical miles (8,900 km; 5,500 mi) at 10 knots (19 km/h; 12 mph).", "title": "Description" }, { "paragraph_id": 3, "text": "The main battery of the Conte di Cavour class consisted of thirteen 305-millimeter Model 1909 guns, in five centerline gun turrets, with a twin-gun turret superfiring over a triple-gun turret in fore and aft pairs, and a third triple turret amidships. Their secondary armament consisted of eighteen 120-millimeter (4.7 in) guns mounted in casemates on the sides of the hull. For defense against torpedo boats, the ships carried fourteen 76.2-millimeter (3 in) guns; thirteen of these could be mounted on the turret tops, but they could be positioned in 30 different locations, including some on the forecastle and upper decks. They were also fitted with three submerged 450-millimeter (17.7 in) torpedo tubes, one on each broadside and the third in the stern.", "title": "Description" }, { "paragraph_id": 4, "text": "The Conte di Cavour-class ships had a complete waterline armor belt that had a maximum thickness of 250 millimeters (9.8 in) amidships, which reduced to 130 millimeters (5.1 in) towards the stern and 80 millimeters (3.1 in) towards the bow. 
They had two armored decks: the main deck was 24 mm (0.94 in) thick on the flat that increased to 40 millimeters (1.6 in) on the slopes that connected it to the main belt. The second deck was 30 millimeters (1.2 in) thick. Frontal armor of the gun turrets was 280 millimeters (11 in) in thickness and the sides were 240 millimeters (9.4 in) thick. The armor protecting their barbettes ranged in thickness from 130 to 230 millimeters (5.1 to 9.1 in). The walls of the forward conning tower were 280 millimeters thick.", "title": "Description" }, { "paragraph_id": 5, "text": "Shortly after the end of World War I, the number of 76.2 mm guns was reduced to 13, all mounted on the turret tops, and six new 76.2-millimeter anti-aircraft (AA) guns were installed abreast the aft funnel. In addition two license-built 2-pounder (1.6 in (40 mm)) AA guns were mounted on the forecastle deck. In 1925–1926 the foremast was replaced by a four-legged (tetrapodal) mast, which was moved forward of the funnels, the rangefinders were upgraded, and the ship was equipped to handle a Macchi M.18 seaplane mounted on the amidships turret. Around that same time, either one or both of the ships was equipped with a fixed aircraft catapult on the port side of the forecastle.", "title": "Modifications and reconstruction" }, { "paragraph_id": 6, "text": "Giulio Cesare began an extensive reconstruction in October 1933 at the Cantieri del Tirreno shipyard in Genoa that lasted until October 1937. A new bow section was grafted over the existing bow which increased her length by 10.31 meters (33 ft 10 in) to 186.4 meters (611 ft 7 in) and her beam increased to 28.6 meters (93 ft 10 in). The ship's draft at deep load increased to 10.42 meters (34 ft 2 in). All of the changes made increased her displacement to 26,140 long tons (26,560 t) at standard load and 29,100 long tons (29,600 t) at deep load. The ship's crew increased to 1,260 officers and enlisted men. 
Two of the propeller shafts were removed and the existing turbines were replaced by two Belluzzo geared steam turbines rated at 75,000 shp (56,000 kW). The boilers were replaced by eight Yarrow boilers. On her sea trials in December 1936, before her reconstruction was fully completed, Giulio Cesare reached a speed of 28.24 knots (52.30 km/h; 32.50 mph) from 93,430 shp (69,670 kW). In service her maximum speed was about 27 knots (50 km/h; 31 mph) and she had a range of 6,400 nautical miles (11,900 km; 7,400 mi) at a speed of 13 knots (24 km/h; 15 mph).", "title": "Modifications and reconstruction" }, { "paragraph_id": 7, "text": "The main guns were bored out to 320 mm (12.6 in) and the center turret and the torpedo tubes were removed. All of the existing secondary armament and AA guns were replaced by a dozen 120 mm guns in six twin-gun turrets and eight 100 mm (4 in) AA guns in twin turrets. In addition the ship was fitted with a dozen Breda 37-millimeter (1.5 in) light AA guns in six twin-gun mounts and twelve 13.2-millimeter (0.52 in) Breda M31 anti-aircraft machine guns, also in twin mounts. In 1940 the 13.2 mm machine guns were replaced by 20 mm (0.79 in) AA guns in twin mounts. Giulio Cesare received two more twin mounts as well as four additional 37 mm guns in twin mounts on the forecastle between the two turrets in 1941. The tetrapodal mast was replaced with a new forward conning tower, protected with 260-millimeter (10.2 in) thick armor. 
Atop the conning tower there was a fire-control director fitted with two large stereo-rangefinders, with a base length of 7.2 meters (23.6 ft).", "title": "Modifications and reconstruction" }, { "paragraph_id": 8, "text": "The deck armor was increased during the reconstruction to a total of 135 millimeters (5.3 in) over the engine and boiler rooms and 166 millimeters (6.5 in) over the magazines, although its distribution over three decks meant that it was considerably less effective than a single plate of the same thickness. The armor protecting the barbettes was reinforced with 50-millimeter (2 in) plates. All this armor weighed a total of 3,227 long tons (3,279 t). The existing underwater protection was replaced by the Pugliese torpedo defense system that consisted of a large cylinder surrounded by fuel oil or water that was intended to absorb the blast of a torpedo warhead. It lacked, however, enough depth to be fully effective against contemporary torpedoes. A major problem of the reconstruction was that the ship's increased draft meant that their waterline armor belt was almost completely submerged with any significant load.", "title": "Modifications and reconstruction" }, { "paragraph_id": 9, "text": "Giulio Cesare, named after Julius Caesar, was laid down at the Gio. Ansaldo & C. shipyard in Genoa on 24 June 1910 and launched on 15 October 1911. She was completed on 14 May 1914 and served as a flagship in the southern Adriatic Sea during World War I. She saw no action, however, and spent little time at sea. Admiral Paolo Thaon di Revel, the Italian naval chief of staff, believed that Austro-Hungarian submarines and minelayers could operate too effectively in the narrow waters of the Adriatic. The threat from these underwater weapons to his capital ships was too serious for him to use the fleet in an active way. 
Instead, Revel decided to implement a blockade at the relatively safer southern end of the Adriatic with the battle fleet, while smaller vessels, such as the MAS torpedo boats, conducted raids on Austro-Hungarian ships and installations. Meanwhile, Revel's battleships would be preserved to confront the Austro-Hungarian battle fleet in the event that it sought a decisive engagement.", "title": "Construction and service" }, { "paragraph_id": 10, "text": "Giulio Cesare made port visits in the Levant in 1919 and 1920. Both Giulio Cesare and Conte di Cavour supported Italian operations on Corfu in 1923 after an Italian general and his staff were murdered at the Greek–Albanian frontier; Benito Mussolini, who had been looking for a pretext to seize Corfu, ordered Italian troops to occupy the island. Cesare became a gunnery training ship in 1928, after having been in reserve since 1926. She was reconstructed at Cantieri del Tirreno, Genoa, between 1933 and 1937. Both ships participated in a naval review by Adolf Hitler in the Bay of Naples in May 1938 and covered the invasion of Albania in May 1939.", "title": "Construction and service" }, { "paragraph_id": 11, "text": "Early in World War II, the ship took part in the Battle of Calabria (also known as the Battle of Punta Stilo), together with Conte di Cavour, on 9 July 1940, as part of the 1st Battle Squadron, commanded by Admiral Inigo Campioni, during which she engaged major elements of the British Mediterranean Fleet. The British were escorting a convoy from Malta to Alexandria, while the Italians had finished escorting another from Naples to Benghazi, Libya. Admiral Andrew Cunningham, commander of the Mediterranean Fleet, attempted to interpose his ships between the Italians and their base at Taranto. The fleets spotted each other in the middle of the afternoon and the battleships opened fire at 15:53 at a range of nearly 27,000 meters (29,000 yd). 
The two leading British battleships, HMS Warspite and Malaya, replied a minute later. Three minutes after she opened fire, shells from Giulio Cesare began to straddle Warspite which made a small turn and increased speed, to throw off the Italian ship's aim, at 16:00. Some rounds fired by Giulio Cesare overshot Warspite and near-missed the destroyers HMS Decoy and Hereward, puncturing their superstructures with splinters. At that same time, a shell from Warspite struck Giulio Cesare at a distance of about 24,000 meters (26,000 yd). The shell pierced the rear funnel and detonated inside it, blowing out a hole nearly 6.1 meters (20 ft) across. Fragments started several fires and their smoke was drawn into the boiler rooms, forcing four boilers off-line as their operators could not breathe. This reduced the ship's speed to 18 knots (33 km/h; 21 mph). Uncertain how severe the damage was, Campioni ordered his battleships to turn away in the face of superior British numbers and they successfully disengaged. Repairs to Giulio Cesare were completed by the end of August and both ships unsuccessfully attempted to intercept British convoys to Malta in August and September.", "title": "Construction and service" }, { "paragraph_id": 12, "text": "On the night of 11 November 1940, Giulio Cesare and the other Italian battleships, along with several other warships, were at anchor in Taranto harbor when they were attacked by 21 Fairey Swordfish torpedo bombers from the British aircraft carrier HMS Illustrious. One torpedo sank Conte di Cavour in shallow water, but Giulio Cesare was not hit during the attack. She participated in the Battle of Cape Spartivento on 27 November 1940, but never got close enough to any British ships to fire at them. The ship was damaged in January 1941 by splinters from a near miss during an air raid on Naples by Vickers Wellington bombers of the Royal Air Force; repairs at Genoa were completed in early February. 
On 8 February, she sailed to the Straits of Bonifacio to intercept what the Italians thought was a Malta convoy, but was actually a raid on Genoa. She failed to make contact with any British forces. She participated in the First Battle of Sirte on 17 December 1941, providing distant cover for a convoy bound for Libya, and briefly engaging the escort force of a British convoy. She also provided distant cover for another convoy to North Africa in early January 1942. Giulio Cesare was reduced to a training ship afterwards at Taranto and later Pola. After the Italian surrender on 8 September 1943, she steamed to Taranto, putting down a mutiny and enduring an ineffective attack by five German aircraft en route. She then sailed for Malta where she arrived on 12 September to be interned. The ship remained there until 17 June 1944 when she returned to Taranto where she remained for the next four years.", "title": "Construction and service" }, { "paragraph_id": 13, "text": "After the war, Giulio Cesare was allocated to the Soviet Union as part of war reparations. She was moved to Augusta, Sicily, on 9 December 1948, where an unsuccessful attempt was made at sabotage. The ship was stricken from the naval register on 15 December and turned over to the Soviets on 6 February 1949 under the temporary name of Z11 in Vlorë, Albania. She was renamed Novorossiysk, after the Soviet city of that name on the Black Sea. The Soviets used her as a training ship, and gave her eight refits. In 1953, all Italian light AA guns were replaced by eighteen 37 mm 70-K AA guns in six twin mounts and six singles. Also replaced were her fire-control systems and radars. This was intended as a temporary rearmament, as the Soviets drew up plans to replace her secondary 120 mm mounts with the 130 mm/58 SM-2 that was in development, and the 100 mm and 37 mm guns with eight quadruple 45 mm mounts. 
While at anchor in Sevastopol on the night of 28/29 October 1955, an explosion ripped a 4-by-14-meter (13 by 46 ft) hole in the forecastle forward of 'A' turret. The flooding could not be controlled, and she capsized with the loss of 617 men, including 61 men sent from other ships to assist.", "title": "Construction and service" }, { "paragraph_id": 14, "text": "The cause of the explosion is still unclear. The official cause, regarded as the most probable, was a magnetic RMH or LMB bottom mine, laid by the Germans during World War II and triggered by the dragging of the battleship's anchor chain before mooring for the last time. Subsequent searches located 32 mines of these types, some of them within 50 meters (160 ft) of the explosion. The damage was consistent with an explosion of 1,000–1,200 kilograms (2,200–2,600 lb) of TNT, and more than one mine may have detonated. Other explanations for the ship's loss have been proposed, and the most popular of these is that she was sunk by Italian frogmen of the wartime special operations unit Decima Flottiglia MAS who – more than ten years after the cessation of hostilities – were either avenging the transfer of the former Italian battleship to the USSR or sinking it on behalf of NATO. Novorossiysk was stricken from the naval register on 24 February 1956, salvaged on 4 May 1957, and subsequently scrapped.", "title": "Construction and service" }, { "paragraph_id": 15, "text": "44°37′7″N 33°32′8″E / 44.61861°N 33.53556°E / 44.61861; 33.53556", "title": "External links" } ]
Giulio Cesare was one of three Conte di Cavour-class dreadnought battleships built for the Royal Italian Navy in the 1910s. Completed in 1914, she was little used and saw no combat during the First World War. The ship supported operations during the Corfu Incident in 1923 and spent much of the rest of the decade in reserve. She was rebuilt between 1933 and 1937 with more powerful guns, additional armor and considerably more speed than before. During World War II, both Giulio Cesare and her sister ship, Conte di Cavour, participated in the Battle of Calabria in July 1940, when the former was lightly damaged. They were both present when British torpedo bombers attacked the fleet at Taranto in November 1940, but Giulio Cesare was not damaged. She escorted several convoys to North Africa and participated in the Battle of Cape Spartivento in late 1940 and the First Battle of Sirte in late 1941. She was designated as a training ship in early 1942, and escaped to Malta after the Italian armistice the following year. The ship was transferred to the Soviet Union in 1949 and renamed Novorossiysk (Новороссийск). The Soviets also used her for training until she was sunk in 1955, with the loss of 617 men, by an explosion most likely caused by an old German mine. She was salvaged the following year and later scrapped.
2002-02-25T15:51:15Z
2023-10-04T14:22:48Z
[ "Template:Good article", "Template:Infobox ship image", "Template:Sclass", "Template:Main", "Template:Refn", "Template:Cite web", "Template:Conte di Cavour-class battleship", "Template:Short description", "Template:Infobox ship career", "Template:Lang", "Template:Nautical term", "Template:HMS", "Template:ISBN", "Template:Cite book", "Template:Coord", "Template:Infobox ship characteristics", "Template:Ship", "Template:Reflist", "Template:Cite journal", "Template:Commons category", "Template:For", "Template:Infobox ship begin", "Template:Convert", "Template:Cvt", "Template:Portal bar", "Template:1955 shipwrecks" ]
https://en.wikipedia.org/wiki/Italian_battleship_Giulio_Cesare
15,442
INS Vikrant (1961)
INS Vikrant (from Sanskrit vikrānta, "courageous") was a Majestic-class aircraft carrier of the Indian Navy. The ship was laid down as HMS Hercules for the British Royal Navy during World War II, but was put on hold when the war ended. India purchased the incomplete carrier in 1957, and construction was completed in 1961. Vikrant was commissioned as the first aircraft carrier of the Indian Navy and played a key role in enforcing the naval blockade of East Pakistan during the Indo-Pakistani War of 1971. In its later years, the ship underwent major refits to embark modern aircraft, before being decommissioned in January 1997. She was preserved as a museum ship in Naval Docks, Mumbai until 2012. In January 2014, the ship was sold through an online auction and scrapped in November 2014 after final clearance from the Supreme Court. In 1943 the Royal Navy commissioned six light aircraft carriers in an effort to counter the German and Japanese navies. The 1942 Design Light Fleet Carrier, commonly referred to as the British Light Fleet Carrier, was the result. Serving with eight navies between 1944 and 2001, these ships were designed and constructed by civilian shipyards as an intermediate step between the full-sized fleet aircraft carriers and the less expensive but limited-capability escort carriers. Sixteen light fleet carriers were ordered, and all were laid down as what became the Colossus class in 1942 and 1943. The final six ships were modified during construction to handle larger and faster aircraft, and were re-designated the Majestic class. The improvements from the Colossus class to the Majestic class included heavier displacement, armament, catapult, aircraft lifts and aircraft capacity. Construction on the ships was suspended at the end of World War II, as the ships were surplus to the Royal Navy's peacetime requirements. Instead, the carriers were modernized and sold to several Commonwealth nations. 
The ships were similar, but each varied depending on the requirements of the country to which the ship was sold. HMS Hercules, the fifth ship in the Majestic class, was ordered on 7 August 1942 and laid down on 14 October 1943 by Vickers-Armstrongs at High Walker on the River Tyne. After World War II ended with Japan's surrender on 2 September 1945, she was launched on 22 September, and her construction was suspended in May 1946. At the time of suspension, she was 75 per cent complete. Her hull was preserved, and in May 1947 she was laid up in Gareloch off the Clyde. In January 1957, she was purchased by India and was towed to Belfast to complete her construction and modifications by Harland & Wolff. Several improvements to the original design were ordered by the Indian Navy, including an angled deck, steam catapults, and a modified island. Local "knowledge" in Belfast was that some of the trolley buses being decommissioned at the time left Belfast on Vikrant, to test the new steam catapult on the way to India. Vikrant displaced 16,000 t (15,750 long tons) at standard load and 19,500 t (19,200 long tons) at deep load. She had an overall length of 700 ft (210 m), a beam of 128 ft (39 m) and a mean deep draught of 24 ft (7.3 m). She was powered by a pair of Parsons geared steam turbines, driving two propeller shafts, using steam provided by four Admiralty three-drum boilers. The turbines developed a total of 40,000 shaft horsepower (30,000 kW) which gave a maximum speed of 25 knots (46 km/h; 29 mph). Vikrant carried about 3,175 t (3,125 long tons) of fuel oil that gave her a range of 12,000 nmi (22,000 km; 14,000 mi) at 14 knots (26 km/h; 16 mph), and 6,200 mi (10,000 km) at 23 knots (43 km/h; 26 mph). The air and ship crew comprised 1,110 officers and men. The ship was armed with sixteen 40-millimetre (1.6 in) Bofors anti-aircraft guns, but these were later reduced to eight. 
At various times, her aircraft consisted of Hawker Sea Hawk and STOVL BAe Sea Harrier jet fighters, Sea King Mk 42B and HAL Chetak helicopters, and Breguet Br.1050 Alizé anti-submarine aircraft. The carrier fielded between 21 and 23 aircraft of all types. Vikrant's flight deck was designed to handle aircraft up to 24,000 lb (11,000 kg), but 20,000 lb (9,100 kg) remained the heaviest landing weight of an aircraft. Larger 54 by 34 feet (16.5 by 10.4 m) lifts were installed. The ship was equipped with one LW-05 air-search radar, one ZW-06 surface-search radar, one LW-10 tactical radar and one Type 963 aircraft landing radar with other communication systems. The Indian Navy's first aircraft carrier was commissioned as INS Vikrant on 4 March 1961 in Belfast by Vijaya Lakshmi Pandit, the Indian High Commissioner to the United Kingdom. The name Vikrant was derived from the Sanskrit word vikrānta meaning "stepping beyond", "courageous" or "bold". Captain Pritam Singh Mahindroo was the first commanding officer of the ship. Two squadrons were to be embarked on the ship: INAS 300, commanded by Lieutenant Commander B. R. Acharya, which had British Hawker Sea Hawk fighter-bombers, and INAS 310, commanded by Lieutenant Commander Mihir K. Roy, which had French Alizé anti-submarine aircraft. On 18 May 1961, the first jet landed on her deck. It was piloted by Lieutenant Radhakrishna Hariram Tahiliani, who later served as admiral and Chief of the Naval Staff of India from 1984 to 1987. Vikrant formally joined the Indian Navy's fleet in Bombay (now Mumbai) on 3 November 1961, when she was received at Ballard Pier by then Prime Minister Jawaharlal Nehru. In December of that year, the ship was deployed for Operation Vijay (the code name for the annexation of Goa) off the coast of Goa with two destroyers, INS Rajput and INS Kirpan. Vikrant did not see action, and patrolled along the coast to deter foreign interference. 
During the Indo-Pakistani War of 1965, Vikrant was in dry dock refitting, and did not see any action. In June 1970, Vikrant was docked at the Naval Dockyard, Bombay, due to many internal fatigue cracks and fissures in the water drums of her boilers that could not be repaired by welding. As replacement drums were not available locally, four new ones were ordered from Britain, and Naval Headquarters issued orders not to use the boilers until further notice. On 26 February 1971 the ship was moved from Ballard Pier Extension to the anchorage, without replacement drums. The main objective behind this move was to light up the boilers at reduced pressure, and work up the main and flight deck machinery that had been idle for almost seven months. On 1 March, the boilers were ignited, and basin trials up to 40 revolutions per minute (RPM) were conducted. Catapult trials were conducted on the same day. The ship began preliminary sea trials on 18 March and returned two days later. Trials were again conducted on 26–27 April. The navy decided to limit the boilers to a pressure of 400 pounds per square inch (2,800 kPa) and the propeller revolutions to 120 RPM ahead and 80 RPM astern, reducing the ship's speed to 14 knots (26 km/h; 16 mph). With the growing expectations of a war with Pakistan in the near future, the navy started to transfer its ships to strategically advantageous locations in Indian waters. The primary concern of Naval Headquarters about the operation was the serviceability of Vikrant. When asked his opinion regarding the involvement of Vikrant in the war, Fleet Operations Officer Captain Gulab Mohanlal Hiranandani told the Chief of the Naval Staff Admiral Sardarilal Mathradas Nanda: ...during the 1965 war Vikrant was sitting in Bombay Harbour and did not go out to sea. If the same thing happened in 1971, Vikrant would be called a white elephant and naval aviation would be written off. Vikrant had to be seen being operational even if we didn't fly any aircraft. 
Nanda and Hiranandani proved to be instrumental in taking Vikrant to war. There were objections that the ship might have severe operational difficulties that would expose the carrier to increased danger on operations. In addition, the three Daphne-class submarines acquired by the Pakistan Navy posed a significant risk to the carrier. In June, extensive deep sea trials were carried out, with steel safety harnesses around the three boilers still operational. Observation windows were fitted as a precautionary measure, to detect any steam leaks. By the end of June, the trials were complete and Vikrant was cleared to participate in operations, with her speed restricted to 14 knots. As a part of preparations for the war, Vikrant was assigned to the Eastern Naval Command, then to the Eastern Fleet. This fleet consisted of INS Vikrant, the two Leopard-class frigates INS Brahmaputra and INS Beas, the two Petya III-class corvettes INS Kamorta and INS Kavaratti, and one submarine, INS Khanderi. The main reason behind strengthening the Eastern Fleet was to counter the Pakistani maritime forces deployed in support of military operations in East Bengal. A surveillance area of 18,000 square miles (47,000 km²), confined by a triangle with a base of 270 mi (430 km) and sides of 165 mi (266 km) and 225 mi (362 km), was set up in the Bay of Bengal. Any ship in this area was to be challenged and checked. If found to be neutral, it would be escorted to the nearest Indian port; otherwise, it would be captured and taken as a war prize. In the meantime, intelligence reports confirmed that Pakistan was to deploy a US-built Tench-class submarine, PNS Ghazi. Ghazi was considered a serious threat to Vikrant by the Indian Navy, as Vikrant's approximate position would be known by the Pakistanis once she started operating aircraft. 
Of the four available surface ships, INS Kavaratti had no sonar, which meant that the other three had to remain within 5–10 mi (8.0–16.1 km) of Vikrant, without which the carrier would be completely vulnerable to attack by Ghazi. On 23 July, Vikrant sailed off to Cochin in company with the Western Fleet. En route, before reaching Cochin on 26 July, Sea King landing trials were carried out. After the completion of the radar and communication trials on 28 July, she departed for Madras, escorted by Brahmaputra and Beas. The next major problem was operating aircraft from the carrier. The commanding officer of the ship, Captain (later Vice Admiral) S. Prakash, was seriously concerned about flight operations. He worried that aircrew morale would be adversely affected if flight operations were not undertaken, which could be disastrous. Naval Headquarters remained firm on the speed restrictions, and sought confirmation from Prakash whether it was possible to embark an Alizé without compromising the speed restrictions. The speed restrictions imposed by the headquarters meant that Alizé aircraft would have to land at close to stalling speed. Eventually the aircraft weight was reduced, which allowed several of the aircraft to embark, along with a Seahawk squadron. By the end of September, Vikrant and her escorts reached Port Blair. En route to Visakhapatnam, tactical exercises were conducted in the presence of the Flag Officer Commanding-in-Chief of the Eastern Naval Command. From Visakhapatnam, Vikrant set out for Madras for maintenance. Rear Admiral S. H. Sharma was appointed Flag Officer Commanding Eastern Fleet and arrived at Visakhapatnam on 14 October. After receiving the reports that Pakistan might launch preemptive strikes, maintenance was stopped for another tactical exercise, which was completed during the night of 26–27 October at Visakhapatnam. Vikrant then returned to Madras to resume maintenance. 
On 1 November, the Eastern Fleet was formally constituted, and on 13 November, all the ships set out for the Andaman and Nicobar Islands. To avoid misadventures, it was planned to sail Vikrant to a remote anchorage, isolating her from combat. Simultaneously, deception signals would give the impression that Vikrant was operating somewhere between Madras and Visakhapatnam. On 23 November, an emergency was declared in Pakistan after a clash of Indian and Pakistani troops in East Pakistan two days earlier. On 2 December, the Eastern Fleet proceeded to its patrol area in anticipation of an attack by Pakistan. The Pakistan Navy had deployed Ghazi on 14 November with the explicit goal of targeting and sinking Vikrant, and Ghazi reached a location near Madras by the 23rd. In an attempt to deceive the Pakistan Navy and Ghazi, India's Naval Headquarters deployed Rajput as a decoy; the ship sailed 160 mi (260 km) off the coast of Visakhapatnam and broadcast a significant amount of radio traffic, making her appear to be Vikrant. Ghazi, meanwhile, sank off the Visakhapatnam coast under mysterious circumstances. On the night of 3–4 December, a muffled underwater explosion was detected by a coastal battery. The next morning, a local fisherman observed flotsam near the coast, causing Indian naval officials to suspect a vessel had sunk off the coast. The next day, a clearance diving team was sent to search the area, and they confirmed that Ghazi had sunk in shallow waters. The reason for Ghazi's sinking is unclear. The Indian Navy's official historian, Hiranandani, suggests three possibilities, after having analysed the position of the rudder and extent of the damage suffered. The first was that Ghazi had come up to periscope depth to identify her position and may have seen an anti-submarine vessel that caused her to crash dive, which in turn may have led her to bury her bow in the bottom. 
The second possibility is closely related to the first: on the night of the explosion, Rajput was on patrol off Visakhapatnam and observed a severe disturbance in the water. Suspecting that it was a submarine, the ship dropped two depth charges on the spot, at a position very close to the wreckage. The third possibility is that there was a mishap when Ghazi was laying mines on the day before hostilities broke out. Vikrant was redeployed towards Chittagong at the outbreak of hostilities. On 4 December, the ship's Sea Hawks struck shipping in Chittagong and Cox's Bazar harbours, sinking or incapacitating most of the ships present. Later strikes targeted Khulna and the Port of Mongla, which continued until 10 December, while other operations were flown to support a naval blockade of East Pakistan. On 14 December, the Sea Hawks attacked the cantonment area in Chittagong, destroying several Pakistani army barracks. Medium anti-aircraft fire was encountered during this strike. Simultaneous attacks by Alizés continued on Cox's Bazar. After this, Vikrant's fuel levels dropped to less than 25 per cent, and the aircraft carrier sailed to Paradip for refuelling. The crew of INS Vikrant earned two Maha Vir Chakras and twelve Vir Chakra gallantry medals for their part in the war. Vikrant did not see much service after the war, and was given two major modernisation refits: the first from 1979 to 1981 and the second from 1987 to 1989. In the first phase, her boilers, radars, communication systems and anti-aircraft guns were modernised, and facilities to operate Sea Harriers were installed. In the second phase, facilities to operate the new Sea Harrier Vertical/Short Take Off and Land (V/STOL) fighter aircraft and the new Sea King Mk 42B Anti-Submarine Warfare (ASW) helicopters were introduced. A 9.75-degree ski-jump ramp was fitted. The steam catapult was removed during this phase. 
Again in 1991, Vikrant underwent a six-month refit, followed by another fourteen-month refit in 1992–94. She remained operational thereafter, flying Sea Harriers, Sea Kings and Chetaks until her final sea outing on 23 November 1994. In the same year, a fire was also recorded aboard. In January 1995, the navy decided to keep Vikrant in a "safe to float" state. She was laid up and formally decommissioned on 31 January 1997. During her service, INS Vikrant embarked four squadrons of the Naval Air Arm of the Indian Navy: Following decommissioning in 1997, the ship was earmarked for preservation as a museum ship in Mumbai. Lack of funding prevented progress on the ship's conversion to a museum, and it was speculated that the ship would be made into a training ship. In 2001, the ship was opened to the public by the Indian Navy, but the Government of Maharashtra was unable to find a partner to operate the museum on a permanent, long-term basis, and the museum was closed after it was deemed unsafe for the public in 2012. In August 2013, Vice Admiral Shekhar Sinha, Commander-in-Chief of the Western Naval Command, said the Ministry of Defence would scrap the ship as she had become very difficult to maintain and no private bidders had offered to fund the museum's operations. On 3 December 2013, the Indian government decided to auction the ship. The Bombay High Court dismissed a public-interest lawsuit filed by Kiran Paigankar to stop the auction, stating the vessel's dilapidated condition did not warrant her preservation, nor were the necessary funds or government support available. In January 2014, the ship was sold through an online auction to a Darukhana ship-breaker for ₹60 crore (US$7.5 million). The Supreme Court of India dismissed another lawsuit challenging the ship's sale and scrapping on 14 August 2014. Vikrant remained beached off Darukhana in Mumbai Port while awaiting the final clearances of the Mumbai Port Trust. 
On 12 November 2014, the Supreme Court gave its final approval for the carrier to be scrapped, which commenced on 22 November 2014. On 7 April 2022, an FIR was registered against former MP Kirit Somaiya, his son Neil, and others on charges of alleged cheating and criminal breach of trust linked to the collection of funds of up to ₹57 crore for restoring the decommissioned aircraft carrier INS Vikrant. The Trombay Police booked them under Section 420 (cheating and dishonestly inducing delivery of property), Section 406 (punishment for criminal breach of trust) and Section 34 (common intention) of the Indian Penal Code. According to the complaint, the father and son collected the money in 2013–14 in the name of restoring Vikrant, but the funds collected were spent for personal use. Somaiya had led the campaign against the government's intent to commercialise the decommissioned ship by handing it over to private players. In memory of Vikrant, the Vikrant Memorial was unveiled by Vice Admiral Surinder Pal Singh Cheema, Flag Officer Commanding-in-Chief of the Western Naval Command, at K Subash Marg in the Naval Dockyard of Mumbai on 25 January 2016. The memorial is made from metal recovered from the ship. In February 2016, Bajaj unveiled a new motorbike made with metal from Vikrant's scrap and named it Bajaj V in honour of Vikrant. The navy has named its first home-built carrier INS Vikrant in honour of INS Vikrant (R11). The new carrier was built by Cochin Shipyard Limited, and displaces 40,000 t (44,000 short tons). The keel was laid down in February 2009 and she was launched in August 2013. The ship was commissioned on 2 September 2022 by Prime Minister Narendra Modi. The decommissioned ship featured prominently in the film ABCD 2 as a backdrop while it was moored near Darukhana in Mumbai.
[ { "paragraph_id": 0, "text": "INS Vikrant (from Sanskrit vikrānta, \"courageous\") was a Majestic-class aircraft carrier of the Indian Navy. The ship was laid down as HMS Hercules for the British Royal Navy during World War II, but was put on hold when the war ended. India purchased the incomplete carrier in 1957, and construction was completed in 1961. Vikrant was commissioned as the first aircraft carrier of the Indian Navy and played a key role in enforcing the naval blockade of East Pakistan during the Indo-Pakistani War of 1971.", "title": "" }, { "paragraph_id": 1, "text": "In its later years, the ship underwent major refits to embark modern aircraft, before being decommissioned in January 1997. She was preserved as a museum ship in Naval Docks, Mumbai until 2012. In January 2014, the ship was sold through an online auction and scrapped in November 2014 after final clearance from the Supreme Court.", "title": "" }, { "paragraph_id": 2, "text": "In 1943 the Royal Navy commissioned six light aircraft carriers in an effort to counter the German and Japanese navies. The 1942 Design Light Fleet Carrier, commonly referred to as the British Light Fleet Carrier, was the result. Serving with eight navies between 1944 and 2001, these ships were designed and constructed by civilian shipyards as an intermediate step between the full-sized fleet aircraft carriers and the less expensive but limited-capability escort carriers.", "title": "History and construction" }, { "paragraph_id": 3, "text": "Sixteen light fleet carriers were ordered, and all were laid down as what became the Colossus class in 1942 and 1943. The final six ships were modified during construction to handle larger and faster aircraft, and were re-designated the Majestic class. The improvements from the Colossus class to the Majestic class included heavier displacement, armament, catapult, aircraft lifts and aircraft capacity. 
Construction on the ships was suspended at the end of World War II, as the ships were surplus to the Royal Navy's peacetime requirements. Instead, the carriers were modernized and sold to several Commonwealth nations. The ships were similar, but each varied depending on the requirements of the country to which the ship was sold.", "title": "History and construction" }, { "paragraph_id": 4, "text": "HMS Hercules, the fifth ship in the Majestic class, was ordered on 7 August 1942 and laid down on 14 October 1943 by Vickers-Armstrongs at High Walker on the River Tyne. After World War II ended with Japan's surrender on 2 September 1945, she was launched on 22 September, and her construction was suspended in May 1946. At the time of suspension, she was 75 per cent complete. Her hull was preserved, and in May 1947 she was laid up in Gareloch off the Clyde. In January 1957, she was purchased by India and was towed to Belfast to complete her construction and modifications by Harland & Wolff. Several improvements to the original design were ordered by the Indian Navy, including an angled deck, steam catapults, and a modified island. Local \"knowledge\" in Belfast was that some of the trolley buses being decommissioned at the time left Belfast on Vikrant, to test the new steam catapult on the way to India.", "title": "History and construction" }, { "paragraph_id": 5, "text": "Vikrant displaced 16,000 t (15,750 long tons) at standard load and 19,500 t (19,200 long tons) at deep load. She had an overall length of 700 ft (210 m), a beam of 128 ft (39 m) and a mean deep draught of 24 ft (7.3 m). She was powered by a pair of Parsons geared steam turbines, driving two propeller shafts, using steam provided by four Admiralty three-drum boilers. The turbines developed a total of 40,000 shaft horsepower (30,000 kW) which gave a maximum speed of 25 knots (46 km/h; 29 mph). 
Vikrant carried about 3,175 t (3,125 long tons) of fuel oil that gave her a range of 12,000 nmi (22,000 km; 14,000 mi) at 14 knots (26 km/h; 16 mph), and 6,200 mi (10,000 km) at 23 knots (43 km/h; 26 mph). The air and ship crew comprised 1,110 officers and men.", "title": "Design and description" }, { "paragraph_id": 6, "text": "The ship was armed with sixteen 40-millimetre (1.6 in) Bofors anti-aircraft guns, but these were later reduced to eight. At various times, her aircraft consisted of Hawker Sea Hawk and STOVL BAe Sea Harrier jet fighters, Sea King Mk 42B and HAL Chetak helicopters, and Breguet Br.1050 Alizé anti-submarine aircraft. The carrier fielded between 21 and 23 aircraft of all types. Vikrant's flight deck was designed to handle aircraft up to 24,000 lb (11,000 kg), but 20,000 lb (9,100 kg) remained the heaviest landing weight of an aircraft. Larger 54 by 34 feet (16.5 by 10.4 m) lifts were installed.", "title": "Design and description" }, { "paragraph_id": 7, "text": "The ship was equipped with one LW-05 air-search radar, one ZW-06 surface-search radar, one LW-10 tactical radar and one Type 963 aircraft landing radar with other communication systems.", "title": "Design and description" }, { "paragraph_id": 8, "text": "The Indian Navy's first aircraft carrier was commissioned as INS Vikrant on 4 March 1961 in Belfast by Vijaya Lakshmi Pandit, the Indian High Commissioner to the United Kingdom. The name Vikrant was derived from the Sanskrit word vikrānta meaning \"stepping beyond\", \"courageous\" or \"bold\". Captain Pritam Singh Mahindroo was the first commanding officer of the ship. Two squadrons were to be embarked on the ship: INAS 300, commanded by Lieutenant Commander B. R. Acharya, which had British Hawker Sea Hawk fighter-bombers, and INAS 310, commanded by Lieutenant Commander Mihir K. Roy, which had French Alizé anti-submarine aircraft. On 18 May 1961, the first jet landed on her deck. 
It was piloted by Lieutenant Radhakrishna Hariram Tahiliani, who later served as admiral and Chief of the Naval Staff of India from 1984 to 1987. Vikrant formally joined the Indian Navy's fleet in Bombay (now Mumbai) on 3 November 1961, when she was received at Ballard Pier by then Prime Minister Jawaharlal Nehru.", "title": "Service" }, { "paragraph_id": 9, "text": "In December of that year, the ship was deployed for Operation Vijay (the code name for the annexation of Goa) off the coast of Goa with two destroyers, INS Rajput and INS Kirpan. Vikrant did not see action, and patrolled along the coast to deter foreign interference. During the Indo-Pakistani War of 1965, Vikrant was in dry dock refitting, and did not see any action.", "title": "Service" }, { "paragraph_id": 10, "text": "In June 1970, Vikrant was docked at the Naval Dockyard, Bombay, due to many internal fatigue cracks and fissures in the water drums of her boilers that could not be repaired by welding. As replacement drums were not available locally, four new ones were ordered from Britain, and Naval Headquarters issued orders not to use the boilers until further notice. On 26 February 1971 the ship was moved from Ballard Pier Extension to the anchorage, without replacement drums. The main objective behind this move was to light up the boilers at reduced pressure, and work up the main and flight deck machinery that had been idle for almost seven months. On 1 March, the boilers were ignited, and basin trials up to 40 revolutions per minute (RPM) were conducted. Catapult trials were conducted on the same day.", "title": "Service" }, { "paragraph_id": 11, "text": "The ship began preliminary sea trials on 18 March and returned two days later. Trials were again conducted on 26–27 April. The navy decided to limit the boilers to a pressure of 400 pounds per square inch (2,800 kPa) and the propeller revolutions to 120 RPM ahead and 80 RPM astern, reducing the ship's speed to 14 knots (26 km/h; 16 mph). 
With the growing expectations of a war with Pakistan in the near future, the navy started to transfer its ships to strategically advantageous locations in Indian waters. The primary concern of Naval Headquarters about the operation was the serviceability of Vikrant. When asked his opinion regarding the involvement of Vikrant in the war, Fleet Operations Officer Captain Gulab Mohanlal Hiranandani told the Chief of the Naval Staff Admiral Sardarilal Mathradas Nanda:", "title": "Service" }, { "paragraph_id": 12, "text": "...during the 1965 war Vikrant was sitting in Bombay Harbour and did not go out to sea. If the same thing happened in 1971, Vikrant would be called a white elephant and naval aviation would be written off. Vikrant had to be seen being operational even if we didn't fly any aircraft.", "title": "Service" }, { "paragraph_id": 13, "text": "Nanda and Hiranandani proved to be instrumental in taking Vikrant to war. There were objections that the ship might have severe operational difficulties that would expose the carrier to increased danger on operations. In addition, the three Daphne-class submarines acquired by the Pakistan Navy posed a significant risk to the carrier. In June, extensive deep sea trials were carried out, with steel safety harnesses around the three boilers still operational. Observation windows were fitted as a precautionary measure, to detect any steam leaks. By the end of June, the trials were complete and Vikrant was cleared to participate in operations, with her speed restricted to 14 knots.", "title": "Service" }, { "paragraph_id": 14, "text": "As a part of preparations for the war, Vikrant was assigned to the Eastern Naval Command, then to the Eastern Fleet. This fleet consisted of INS Vikrant, the two Leopard-class frigates INS Brahmaputra and INS Beas, the two Petya III-class corvettes INS Kamorta and INS Kavaratti, and one submarine, INS Khanderi. 
The main reason behind strengthening the Eastern Fleet was to counter the Pakistani maritime forces deployed in support of military operations in East Bengal. A surveillance area of 18,000 square miles (47,000 km²), confined by a triangle with a base of 270 mi (430 km) and sides of 165 mi (266 km) and 225 mi (362 km), was set up in the Bay of Bengal. Any ship in this area was to be challenged and checked. If found to be neutral, it would be escorted to the nearest Indian port; otherwise, it would be captured and taken as a war prize.", "title": "Service" }, { "paragraph_id": 15, "text": "In the meantime, intelligence reports confirmed that Pakistan was to deploy a US-built Tench-class submarine, PNS Ghazi. Ghazi was considered a serious threat to Vikrant by the Indian Navy, as Vikrant's approximate position would be known by the Pakistanis once she started operating aircraft. Of the four available surface ships, INS Kavaratti had no sonar, which meant that the other three had to remain within 5–10 mi (8.0–16.1 km) of Vikrant, without which the carrier would be completely vulnerable to attack by Ghazi.", "title": "Service" }, { "paragraph_id": 16, "text": "On 23 July, Vikrant sailed off to Cochin in company with the Western Fleet. En route, before reaching Cochin on 26 July, Sea King landing trials were carried out. After the completion of the radar and communication trials on 28 July, she departed for Madras, escorted by Brahmaputra and Beas. The next major problem was operating aircraft from the carrier. The commanding officer of the ship, Captain (later Vice Admiral) S. Prakash, was seriously concerned about flight operations. He worried that aircrew morale would be adversely affected if flight operations were not undertaken, which could be disastrous. Naval Headquarters remained firm on the speed restrictions, and sought confirmation from Prakash whether it was possible to embark an Alizé without compromising the speed restrictions. 
The speed restrictions imposed by the headquarters meant that Alizé aircraft would have to land at close to stalling speed. Eventually the aircraft weight was reduced, which allowed several of the aircraft to embark, along with a Seahawk squadron.", "title": "Service" }, { "paragraph_id": 17, "text": "By the end of September, Vikrant and her escorts reached Port Blair. En route to Visakhapatnam, tactical exercises were conducted in the presence of the Flag Officer Commanding-in-Chief of the Eastern Naval Command. From Visakhapatnam, Vikrant set out for Madras for maintenance. Rear Admiral S. H. Sharma was appointed Flag Officer Commanding Eastern Fleet and arrived at Visakhapatnam on 14 October. After receiving the reports that Pakistan might launch preemptive strikes, maintenance was stopped for another tactical exercise, which was completed during the night of 26–27 October at Visakhapatnam. Vikrant then returned to Madras to resume maintenance. On 1 November, the Eastern Fleet was formally constituted, and on 13 November, all the ships set out for the Andaman and Nicobar Islands. To avoid misadventures, it was planned to sail Vikrant to a remote anchorage, isolating her from combat. Simultaneously, deception signals would give the impression that Vikrant was operating somewhere between Madras and Visakhapatnam.", "title": "Service" }, { "paragraph_id": 18, "text": "On 23 November, an emergency was declared in Pakistan after a clash of Indian and Pakistani troops in East Pakistan two days earlier. On 2 December, the Eastern Fleet proceeded to its patrol area in anticipation of an attack by Pakistan. The Pakistan Navy had deployed Ghazi on 14 November with the explicit goal of targeting and sinking Vikrant, and Ghazi reached a location near Madras by the 23rd. 
In an attempt to deceive the Pakistan Navy and Ghazi, India's Naval Headquarters deployed Rajput as a decoy; the ship sailed 160 mi (260 km) off the coast of Visakhapatnam and broadcast a significant amount of radio traffic, making her appear to be Vikrant.", "title": "Service" }, { "paragraph_id": 19, "text": "Ghazi, meanwhile, sank off the Visakhapatnam coast under mysterious circumstances. On the night of 3–4 December, a muffled underwater explosion was detected by a coastal battery. The next morning, a local fisherman observed flotsam near the coast, causing Indian naval officials to suspect a vessel had sunk off the coast. The next day, a clearance diving team was sent to search the area, and they confirmed that Ghazi had sunk in shallow waters.", "title": "Service" }, { "paragraph_id": 20, "text": "The reason for Ghazi's sinking is unclear. The Indian Navy's official historian, Hiranandani, suggests three possibilities, after having analysed the position of the rudder and extent of the damage suffered. The first was that Ghazi had come up to periscope depth to identify her position and may have seen an anti-submarine vessel that caused her to crash dive, which in turn may have led her to bury her bow in the bottom. The second possibility is closely related to the first: on the night of the explosion, Rajput was on patrol off Visakhapatnam and observed a severe disturbance in the water. Suspecting that it was a submarine, the ship dropped two depth charges on the spot, at a position very close to the wreckage. The third possibility is that there was a mishap when Ghazi was laying mines on the day before hostilities broke out.", "title": "Service" }, { "paragraph_id": 21, "text": "Vikrant was redeployed towards Chittagong at the outbreak of hostilities. On 4 December, the ship's Sea Hawks struck shipping in Chittagong and Cox's Bazar harbours, sinking or incapacitating most of the ships present. 
Later strikes targeted Khulna and the Port of Mongla, which continued until 10 December, while other operations were flown to support a naval blockade of East Pakistan. On 14 December, the Sea Hawks attacked the cantonment area in Chittagong, destroying several Pakistani army barracks. Medium anti-aircraft fire was encountered during this strike. Simultaneous attacks by Alizés continued on Cox's Bazar. After this, Vikrant's fuel levels dropped to less than 25 per cent, and the aircraft carrier sailed to Paradip for refueling. The crew of INS Vikrant earned two Maha Vir Chakras and twelve Vir Chakra gallantry medals for their part in the war.", "title": "Service" }, { "paragraph_id": 22, "text": "Vikrant did not see much service after the war, and was given two major modernisation refits—the first one from 1979 to 1981 and the second one from 1987 to 1989. In the first phase, her boilers, radars, communication systems and anti-aircraft guns were modernised, and facilities to operate Sea Harriers were installed. In the second phase, facilities to operate the new Sea Harrier Vertical/Short Take Off and Land (V/STOL) fighter aircraft and the new Sea King Mk 42B Anti-Submarine Warfare (ASW) helicopters were introduced. A 9.75-degree ski-jump ramp was fitted. The steam catapult was removed during this phase. Again in 1991, Vikrant underwent a six-month refit, followed by another fourteen-month refit in 1992–94. She remained operational thereafter, flying Sea Harriers, Sea Kings and Chetaks until her final sea outing on 23 November 1994. In the same year, a fire was also recorded aboard. In January 1995, the navy decided to keep Vikrant in \"safe to float\" state. 
She was laid up and formally decommissioned on 31 January 1997.", "title": "Service" }, { "paragraph_id": 23, "text": "During her service, INS Vikrant embarked four squadrons of the Naval Air Arm of the Indian Navy:", "title": "Service" }, { "paragraph_id": 24, "text": "Following decommissioning in 1997, the ship was earmarked for preservation as a museum ship in Mumbai. Lack of funding prevented progress on the ship's conversion to a museum and it was speculated that the ship would be made into a training ship. In 2001, the ship was opened to the public by the Indian Navy, but the Government of Maharashtra was unable to find a partner to operate the museum on a permanent, long-term basis and the museum was closed after it was deemed unsafe for the public in 2012.", "title": "Museum ship" }, { "paragraph_id": 25, "text": "In August 2013, Vice Admiral Shekhar Sinha, Commander-in-Chief of the Western Naval Command, said the Ministry of Defence would scrap the ship as she had become very difficult to maintain and no private bidders had offered to fund the museum's operations. On 3 December 2013, the Indian government decided to auction the ship. The Bombay High Court dismissed a public-interest lawsuit filed by Kiran Paigankar to stop the auction, stating the vessel's dilapidated condition did not warrant her preservation, nor were the necessary funds or government support available.", "title": "Scrapping" }, { "paragraph_id": 26, "text": "In January 2014, the ship was sold through an online auction to a Darukhana ship-breaker for ₹60 crore (US$7.5 million). The Supreme Court of India dismissed another lawsuit challenging the ship's sale and scrapping on 14 August 2014. Vikrant remained beached off Darukhana in Mumbai Port while awaiting the final clearances of the Mumbai Port Trust. 
On 12 November 2014, the Supreme Court gave its final approval for the carrier to be scrapped, which commenced on 22 November 2014.", "title": "Scrapping" }, { "paragraph_id": 27, "text": "On 7 April 2022, an FIR against ex-MP Kirit Somaiya, his son Neil, and others was registered on charges of alleged cheating and criminal breach of trust linked to the collection of funds of up to Rs. 57 crore for restoring the decommissioned aircraft carrier INS Vikrant. The Trombay Police booked them under Section 420 (cheating and dishonestly inducing delivery of property), Section 406 (punishment for criminal breach of trust) and Section 34 (common intention) of the Indian Penal Code.", "title": "Scrapping" }, { "paragraph_id": 28, "text": "According to the complaint, the father and son duo collected the money in 2013-14 in the name of restoring Vikrant, but the funds collected were spent for personal use.", "title": "Scrapping" }, { "paragraph_id": 29, "text": "Somaiya had been at the forefront of attacking the government's intent to commercialize the decommissioned ship by handing it to private players.", "title": "Scrapping" }, { "paragraph_id": 30, "text": "In memory of Vikrant, the Vikrant Memorial was unveiled by Vice Admiral Surinder Pal Singh Cheema, Flag Officer Commanding-in-Chief of the Western Naval Command, at K Subash Marg in the Naval Dockyard of Mumbai on 25 January 2016. The memorial is made from metal recovered from the ship. In February 2016, Bajaj unveiled a new motorbike made with metal from Vikrant's scrap and named it Bajaj V in honour of Vikrant.", "title": "Legacy" }, { "paragraph_id": 31, "text": "The navy has named its first home-built carrier INS Vikrant in honour of INS Vikrant (R11). The new carrier is built by Cochin Shipyard Limited, and will displace 40,000 t (44,000 short tons). The keel was laid down in February 2009 and she was launched in August 2013.
The ship was commissioned on 2 September 2022 by Prime Minister Narendra Modi.", "title": "Legacy" }, { "paragraph_id": 32, "text": "The decommissioned ship featured prominently in the film ABCD 2 as a backdrop while it was moored near Darukhana in Mumbai.", "title": "In popular culture" } ]
INS Vikrant was a Majestic-class aircraft carrier of the Indian Navy. The ship was laid down as HMS Hercules for the British Royal Navy during World War II, but was put on hold when the war ended. India purchased the incomplete carrier in 1957, and construction was completed in 1961. Vikrant was commissioned as the first aircraft carrier of the Indian Navy and played a key role in enforcing the naval blockade of East Pakistan during the Indo-Pakistani War of 1971. In her later years, the ship underwent major refits to embark modern aircraft, before being decommissioned in January 1997. She was preserved as a museum ship in Naval Docks, Mumbai until 2012. In January 2014, the ship was sold through an online auction and scrapped in November 2014 after final clearance from the Supreme Court.
2001-12-23T15:57:15Z
2023-12-29T15:41:57Z
[ "Template:Short description", "Template:Redirect", "Template:Refend", "Template:Majestic-class aircraft carriers", "Template:Cite news", "Template:Other ships", "Template:Use dmy dates", "Template:Efn", "Template:Main", "Template:Ship", "Template:Convert", "Template:'", "Template:Refbegin", "Template:Indian Navy aircraft carriers", "Template:INS", "Template:Blockquote", "Template:Reflist", "Template:Infobox ship begin", "Template:See also", "Template:Commons category", "Template:Ship classes of the Indian Navy", "Template:Sfn", "Template:Citation needed", "Template:Citation", "Template:Small", "Template:INRconvert", "Template:Cite web", "Template:Featured article", "Template:Infobox ship image", "Template:Infobox ship career", "Template:Infobox ship characteristics", "Template:Sclass" ]
https://en.wikipedia.org/wiki/INS_Vikrant_(1961)
15,443
Western imperialism in Asia
The influence and imperialism of Western Europe and associated states (such as Russia, Japan, and the United States) peaked in Asian territories during the colonial period beginning in the 16th century and substantially receded with 20th-century decolonization. It originated in the 15th-century search for trade routes to the Indian subcontinent and Southeast Asia that led directly to the Age of Discovery, and additionally the introduction of early modern warfare into what Europeans first called the East Indies and later the Far East. By the early 16th century, the Age of Sail greatly expanded Western European influence and development of the spice trade under colonialism. European-style colonial empires and imperialism operated in Asia throughout six centuries of colonialism, formally ending with the handover of the Portuguese Empire's last colony, Macau, in 1999. The empires introduced Western concepts of nation and the multinational state. This article attempts to outline the consequent development of the Western concept of the nation state. European political power, commerce, and culture in Asia gave rise to growing trade in commodities—a key development in the rise of today's modern world free market economy. In the 16th century, the Portuguese broke the (overland) monopoly of the Arabs and Italians in trade between Asia and Europe by the discovery of the sea route to India around the Cape of Good Hope. The ensuing rise of the rival Dutch East India Company gradually eclipsed Portuguese influence in Asia. Dutch forces first established independent bases in the East (most significantly Batavia, the heavily fortified headquarters of the Dutch East India Company) and then between 1640 and 1660 wrested Malacca, Ceylon, some southern Indian ports, and the lucrative Japan trade from the Portuguese. Later, the English and the French established settlements in India and trade with China, and their acquisitions would gradually surpass those of the Dutch.
Following the end of the Seven Years' War in 1763, the British eliminated French influence in India and established the British East India Company (founded in 1600) as the most important political force on the Indian subcontinent. Before the Industrial Revolution in the mid-to-late 19th century, demand for oriental goods such as porcelain, silk, spices, and tea remained the driving force behind European imperialism. The Western European stake in Asia remained confined largely to trading stations and strategic outposts necessary to protect trade. Industrialization, however, dramatically increased European demand for Asian raw materials, with the severe Long Depression of the 1870s provoking a scramble for new markets for European industrial products and financial services in Africa, the Americas, Eastern Europe, and especially in Asia. This scramble coincided with a new era in global colonial expansion known as "the New Imperialism", which saw a shift in focus from trade and indirect rule to formal colonial control of vast overseas territories ruled as political extensions of their mother countries. Between the 1870s and the beginning of World War I in 1914, the United Kingdom, France, and the Netherlands—the established colonial powers in Asia—added to their empires vast expanses of territory in the Middle East, the Indian Subcontinent, and Southeast Asia. In the same period, the Empire of Japan, following the Meiji Restoration; the German Empire, following the end of the Franco-Prussian War in 1871; Tsarist Russia; and the United States, following the Spanish–American War in 1898, quickly emerged as new imperial powers in East Asia and in the Pacific Ocean area. In Asia, World War I and World War II were played out as struggles among several key imperial powers, with conflicts involving the European powers along with Russia and the rising American and Japanese powers.
None of the colonial powers, however, possessed the resources to withstand the strains of both World Wars and maintain their direct rule in Asia. Although nationalist movements throughout the colonial world led to the political independence of nearly all of Asia's remaining colonies, decolonization was interrupted by the Cold War. South East Asia, South Asia, the Middle East, and East Asia remained embedded in a world economic, financial, and military system in which the great powers competed to extend their influence. However, the rapid post-war economic development and rise of the industrialized developed countries of Taiwan, Singapore, South Korea, and Japan and the developing countries of India, the People's Republic of China and its autonomous territory of Hong Kong, along with the collapse of the Soviet Union, have greatly diminished Western European influence in Asia. The United States remains influential with trade and military bases in Asia. European exploration of Asia started in ancient Roman times along the Silk Road. The Romans had knowledge of lands as distant as China. Trade with India through the Roman Egyptian Red Sea ports was significant in the first centuries of the Common Era. In the 13th and 14th centuries, a number of Europeans, many of them Christian missionaries, sought to penetrate into China. The most famous of these travelers was Marco Polo. But these journeys had little permanent effect on east–west trade because of a series of political developments in Asia in the last decades of the 14th century, which put an end to further European exploration of Asia. The Yuan dynasty in China, which had been receptive to European missionaries and merchants, was overthrown, and the new Ming rulers proved unreceptive to religious proselytism. Meanwhile, the Turks consolidated control over the eastern Mediterranean, closing off key overland trade routes.
Thus, until the 15th century, only minor trade and cultural exchanges between Europe and Asia continued at certain terminals controlled by Muslim traders. Western European rulers determined to find new trade routes of their own. The Portuguese spearheaded the drive to find oceanic routes that would provide cheaper and easier access to South and East Asian goods. This charting of oceanic routes between East and West began with the unprecedented voyages of Portuguese and Spanish sea captains. Their voyages were influenced by medieval European adventurers, who had journeyed overland to the Far East and contributed to geographical knowledge of parts of Asia upon their return. In 1488, Bartolomeu Dias rounded the southern tip of Africa under the sponsorship of Portugal's John II, from which point he noticed that the coast swung northeast (Cape of Good Hope). While Dias' crew forced him to turn back, by 1497, Portuguese navigator Vasco da Gama made the first open voyage from Europe to India. In 1520, Ferdinand Magellan, a Portuguese navigator in the service of the Crown of Castile ('Spain'), found a sea route into the Pacific Ocean. In 1509, the Portuguese under Francisco de Almeida won the decisive Battle of Diu against a joint Mamluk and Arab fleet sent to expel the Portuguese from the Arabian Sea. The victory enabled Portugal to implement its strategy of controlling the Indian Ocean. Early in the 16th century, Afonso de Albuquerque emerged as the Portuguese colonial viceroy most instrumental in consolidating Portugal's holdings in Africa and in Asia. He understood that Portugal could wrest commercial supremacy from the Arabs only by force, and therefore devised a plan to establish forts at strategic sites which would dominate the trade routes and also protect Portuguese interests on land.
In 1510, he conquered Goa in India, which enabled him to gradually consolidate control of most of the commercial traffic between Europe and Asia, largely through trade; Europeans started to carry on trade from forts, acting as foreign merchants rather than as settlers. In contrast, early European expansion in the "West Indies" (later known to Europeans as a separate continent from Asia that they would call the "Americas"), following the 1492 voyage of Christopher Columbus, involved heavy settlement in colonies that were treated as political extensions of the mother countries. Lured by the potential of high profits from another expedition, the Portuguese established a permanent base in Cochin, south of the Indian trade port of Calicut, in the early 16th century. In 1510, the Portuguese, led by Afonso de Albuquerque, seized Goa on the coast of India, which Portugal held until 1961, along with Diu and Daman (the remaining territory and enclaves in India from a former network of coastal towns and smaller fortified trading ports added and abandoned or lost centuries before). The Portuguese soon acquired a monopoly over trade in the Indian Ocean. Portuguese viceroy Albuquerque (1509–1515) resolved to consolidate Portuguese holdings in Africa and Asia, and secure control of trade with the East Indies and China. His first objective was Malacca, which controlled the narrow strait through which most Far Eastern trade moved. Captured in 1511, Malacca became the springboard for further eastward penetration, starting with the voyage of António de Abreu and Francisco Serrão in 1512, ordered by Albuquerque, to the Moluccas. Years later the first trading posts were established in the Moluccas, or "Spice Islands", the source of some of the world's most hotly demanded spices, and from there in Makassar and in other, smaller posts in the Lesser Sunda Islands. By 1513–1516, the first Portuguese ships had reached Canton on the southern coasts of China.
In 1513, after a failed attempt to conquer Aden, Albuquerque entered the Red Sea with an armada, the first time Europeans had done so by ocean; and in 1515, Albuquerque consolidated Portuguese hegemony at the gates of the Persian Gulf, already begun by him in 1507, with control of Muscat and Ormuz. Shortly after, other fortified bases and forts were annexed and built along the Gulf, and in 1521, through a military campaign, the Portuguese annexed Bahrain. The Portuguese conquest of Malacca triggered the Malayan–Portuguese war. In 1521, Ming dynasty China defeated the Portuguese at the Battle of Tunmen and then defeated the Portuguese again at the Battle of Xicaowan. The Portuguese tried to establish trade with China by smuggling with the pirates on the offshore islands off the coast of Zhejiang and Fujian, but they were driven away by the Ming navy in the 1530s–1540s. In 1557, China decided to lease Macau to the Portuguese as a place where they could dry goods they transported on their ships; the Portuguese held it until 1999. The Portuguese, based at Goa and Malacca, had now established a lucrative maritime empire in the Indian Ocean meant to monopolize the spice trade. The Portuguese also began a channel of trade with the Japanese, becoming the first recorded Westerners to have visited Japan. This contact introduced Christianity and firearms into Japan. In 1505 (possibly earlier, in 1501), the Portuguese, through Lourenço de Almeida, the son of Francisco de Almeida, reached Ceylon. The Portuguese founded a fort at the city of Colombo in 1517 and gradually extended their control over the coastal areas and inland. In a series of military conflicts and political maneuvers, the Portuguese extended their control over the Sinhalese kingdoms, including Jaffna (1591), Raigama (1593), Sitawaka (1593), and Kotte (1594). However, the aim of unifying the entire island under Portuguese control faced the Kingdom of Kandy's fierce resistance.
The Portuguese, led by Pedro Lopes de Sousa, launched a full-scale military invasion of the kingdom of Kandy in the Campaign of Danture of 1594. The invasion was a disaster for the Portuguese, with their entire army wiped out by Kandyan guerrilla warfare. Constantino de Sá, romantically celebrated in a 17th-century Sinhalese epic (also for his greater humanism and tolerance compared to other governors), led the last military operation, which also ended in disaster. He died in the Battle of Randeniwela, refusing to abandon his troops in the face of total annihilation. The energies of Castile (later, the unified Spain), the other major colonial power of the 16th century, were largely concentrated on the Americas, not South and East Asia, but the Spanish did establish a footing in the Far East in the Philippines. After fighting the Portuguese over the Spice Islands from 1522 until the two powers reached agreement in 1529 (in the Treaty of Zaragoza), the Spanish, led by Miguel López de Legazpi, gradually settled and conquered the Philippines from 1564. After the discovery of the return voyage to the Americas by Andres de Urdaneta in 1565, cargoes of Chinese goods were transported from the Philippines to Mexico and from there to Spain. By this long route, Spain reaped some of the profits of Far Eastern commerce. Spanish officials converted the islands to Christianity and established some settlements, permanently establishing the Philippines as the area of East Asia most oriented toward the West in terms of culture and commerce. The Moro Muslims fought against the Spanish for over three centuries in the Spanish–Moro conflict. The lucrative trade was vastly expanded when the Portuguese began to export slaves from Africa in 1541; however, over time, the rise of the slave trade left Portugal over-extended, and vulnerable to competition from other Western European powers.
Envious of Portugal's control of trade routes, other Western European nations—mainly the Netherlands, France, and England—began to send rival expeditions to Asia. In 1642, the Dutch drove the Portuguese out of the Gold Coast in Africa, the source of the bulk of Portuguese slave laborers, leaving this rich slaving area to other Europeans, especially the Dutch and the English. Rival European powers began to make inroads in Asia as the Portuguese and Spanish trade in the Indian Ocean declined, primarily because they had become hugely over-stretched financially due to the limitations on their investment capacity and contemporary naval technology. Both of these factors worked in tandem, making control over Indian Ocean trade extremely expensive. The existing Portuguese interests in Asia proved sufficient to finance further colonial expansion and entrenchment in areas regarded as of greater strategic importance in Africa and Brazil. Portuguese maritime supremacy was lost to the Dutch in the 17th century, and with this came serious challenges for the Portuguese. However, they still clung to Macau and settled a new colony on the island of Timor. Only in the 1960s and 1970s did the Portuguese begin to relinquish their colonies in Asia. Goa was invaded by India in 1961 and became an Indian state in 1987; Portuguese Timor was abandoned in 1975 and was then invaded by Indonesia, becoming an independent country in 2002; and Macau was handed back to the Chinese as per a treaty in 1999. The arrival of the Portuguese and Spanish and their holy wars against Muslim states in the Malayan–Portuguese war, Spanish–Moro conflict and Castilian War inflamed religious tensions and turned Southeast Asia into an arena of conflict between Muslims and Christians. The Brunei Sultanate's capital at Kota Batu was assaulted by Governor Sande, who led the 1578 Spanish attack.
The Spanish word for "savages", cafres, derived from the Arabic word for "infidel", kafir, and was used by the Spanish to refer to their own "Christian savages" who were arrested in Brunei. The Brunei Sultan, in turn, said that the Castilians were kafir, men who have no souls, condemned to fire when they die, not least because they eat pork; this came after the Spaniards had attacked Islam as an "accursed doctrine", which fed into the hatred between Muslims and Christians sparked by their 1571 war against Brunei. The Sultan's words were in response to insults coming from the Spanish at Manila in 1578. Other Muslims from Champa, Java, Borneo, Luzon, Pahang, Demak, Aceh, and the Malays echoed the rhetoric of holy war against the Spanish and Iberian Portuguese, calling them kafir enemies, in contrast to their earlier, more nuanced views of the Portuguese in the Hikayat Tanah Hitu and Sejarah Melayu. The war by Spain against Brunei was defended in an apologia written by Doctor De Sande. Both Brunei and Sulu had thrived for four centuries, from 1500 to 1900; eventually the British partitioned and took over Brunei, while attacks by the British, Americans, and Spanish caused Sulu's breakdown and downfall. Dar al-Islam was seen as under invasion by "kafirs" both by the Atjehnese, led by Zayn al-din, and by Muslims in the Philippines facing the Spanish invasion, since the Spanish brought the idea of a crusader holy war against the Muslim Moros, just as the Portuguese did in Indonesia and India against what they called "Moors", viewing their political and commercial conquests through the lens of religion in the 16th century. In 1578, the Spanish launched an attack against Jolo, and in 1875 it was destroyed at their hands; it was destroyed once again in 1974 by the Philippines. The Spanish first set foot on Borneo in Brunei.
The Spanish war against Brunei failed to conquer the sultanate, but it totally cut off the Philippines from Brunei's influence; the Spanish then started colonizing Mindanao and building fortresses. In response to the Spanish attacks on Mindanao, the Bisayas, where Spanish forces were stationed, were subjected to retaliatory attacks by the Magindanao in 1599-1600. The Brunei royal family was related to the Muslim Rajahs who ruled the principality of Manila (Kingdom of Maynila) in 1570, and this was what the Spaniards came across on their initial arrival to Manila. Spain uprooted Islam from areas where it was shallow after beginning to force Christianity on the Philippines in their conquests after 1521, even though Islam was already widespread in the 16th-century Philippines. In the Cebu islands of the Philippines, the natives killed the Spanish fleet leader Magellan. Borneo's western coastal areas at Landak, Sukadana, and Sambas saw the growth of Muslim states in the sixteenth century; in the 15th century, the death and burial of the Bruneian king Maharaja Kama took place at Nanking, the capital of China, upon his visit to China with Zheng He's fleet. The Spanish were expelled from Brunei in 1579 after they attacked in 1578. Brunei had fifty thousand inhabitants before the 1597 Spanish attack. During first contact with China, numerous aggressions and provocations were undertaken by the Portuguese. They believed they could mistreat the non-Christians because they themselves were Christians, and acted in the name of their religion in committing crimes and atrocities. This resulted in the Battle of Xicaowan, where the local Chinese navy defeated and captured a fleet of Portuguese caravels. The Portuguese decline in Asia was accelerated by attacks on their commercial empire by the Dutch and the English, which began a global struggle over empire in Asia that lasted until the end of the Seven Years' War in 1763.
The Netherlands revolt against Spanish rule facilitated Dutch encroachment on the Portuguese monopoly over South and East Asian trade. The Dutch looked on Spain's trade and colonies as potential spoils of war. When the two crowns of the Iberian peninsula were joined in 1581, the Dutch felt free to attack Portuguese territories in Asia. By the 1590s, a number of Dutch companies were formed to finance trading expeditions in Asia. Because competition lowered their profits, and because of the doctrines of mercantilism, in 1602 the companies united into a cartel and formed the Dutch East India Company, and received from the government the right to trade and colonize territory in the area stretching from the Cape of Good Hope eastward to the Strait of Magellan. In 1605, armed Dutch merchants captured the Portuguese fort at Amboyna in the Moluccas, which was developed into the company's first secure base. Over time, the Dutch gradually consolidated control over the great trading ports of the East Indies. This control allowed the company to monopolise the world spice trade for decades. Their monopoly over the spice trade became complete after they drove the Portuguese from Malacca in 1641 and Ceylon in 1658. Dutch East India Company colonies or outposts were later established in Atjeh (Aceh), 1667; Macassar, 1669; and Bantam, 1682. The company established its headquarters at Batavia (today Jakarta) on the island of Java. Outside the East Indies, the Dutch East India Company colonies or outposts were also established in Persia (Iran), Bengal (now Bangladesh and part of India), Mauritius (1638-1658/1664-1710), Siam (now Thailand), Guangzhou (Canton, China), Taiwan (1624–1662), and southern India (1616–1795). Ming dynasty China defeated the Dutch East India Company in the Sino-Dutch conflicts. The Chinese first defeated and drove the Dutch out of the Pescadores in 1624. 
The Ming navy under Zheng Zhilong defeated the Dutch East India Company's fleet at the 1633 Battle of Liaoluo Bay. In 1662, Zheng Zhilong's son Zheng Chenggong (also known as Koxinga) expelled the Dutch from Taiwan after defeating them in the siege of Fort Zeelandia. (see History of Taiwan) Further, the Dutch East India Company trade post on Dejima (1641–1857), an artificial island off the coast of Nagasaki, was for a long time the only place where Europeans could trade with Japan. The Vietnamese Nguyễn lords defeated the Dutch in a naval battle in 1643. The Cambodians defeated the Dutch in the Cambodian–Dutch War in 1644. In 1652, Jan van Riebeeck established an outpost at the Cape of Good Hope (the southwestern tip of Africa, currently in South Africa) to restock company ships on their journey to East Asia. This post later became a fully-fledged colony, the Cape Colony (1652–1806). As Cape Colony attracted increasing Dutch and European settlement, the Dutch founded the city of Kaapstad (Cape Town). By 1669, the Dutch East India Company was the richest private company in history, with a huge fleet of merchant ships and warships, tens of thousands of employees, a private army consisting of thousands of soldiers, and a reputation on the part of its stockholders for high dividend payments. The company was in almost constant conflict with the English; relations were particularly tense following the Amboyna Massacre in 1623. During the 18th century, Dutch East India Company possessions were increasingly focused on the East Indies. After the fourth war between the United Provinces and England (1780–1784), the company suffered increasing financial difficulties. In 1799, the company was dissolved, commencing official colonisation of the East Indies. During the era of New Imperialism the territorial claims of the Dutch East India Company (VOC) expanded into a fully fledged colony named the Dutch East Indies. 
Partly driven by renewed colonial aspirations of fellow European nation states, the Dutch strove to establish unchallenged control of the archipelago now known as Indonesia. Six years into formal colonisation of the East Indies, the Dutch Republic in Europe was occupied by the French forces of Napoleon. The Dutch government went into exile in England and formally ceded its colonial possessions to Great Britain. The pro-French Governor-General of Java, Jan Willem Janssens, resisted a British invasion force in 1811 until forced to surrender. British Governor Raffles, who later founded the city of Singapore, ruled the colony for the following 10 years of the British interregnum (1806–1816). After the defeat of Napoleon and the Anglo-Dutch Treaty of 1814, colonial government of the East Indies was ceded back to the Dutch in 1817. The loss of South Africa and the continued scramble for Africa stimulated the Dutch to secure unchallenged dominion over their colony in the East Indies. The Dutch started to consolidate their power base through extensive military campaigns and elaborate diplomatic alliances with indigenous rulers, ensuring the Dutch tricolor was firmly planted in all corners of the Archipelago. These military campaigns included the Padri War (1821–1837), the Java War (1825–1830) and the Aceh War (1873–1904). This raised the need for a considerable military buildup of the colonial army (KNIL). Soldiers were recruited from all over Europe to join the KNIL. The Dutch concentrated their colonial enterprise in the Dutch East Indies (Indonesia) throughout the 19th century. The Dutch lost control of the East Indies to the Japanese during much of World War II. Following the war, the Dutch fought Indonesian independence forces after Japan surrendered to the Allies in 1945. In 1949, most of what was known as the Dutch East Indies was ceded to the independent Republic of Indonesia.
In 1962, Dutch New Guinea too was annexed by Indonesia, de facto ending Dutch imperialism in Asia. The English had sought to stake out claims in India at the expense of the Portuguese since the Elizabethan era. In 1600, Queen Elizabeth I incorporated the English East India Company (later the British East India Company), granting it a monopoly of trade from the Cape of Good Hope eastward to the Strait of Magellan. In 1639, it acquired Madras on the east coast of India, where it quickly surpassed Portuguese Goa as the principal European trading centre on the Indian subcontinent. Through bribes, diplomacy, and manipulation of weak native rulers, the company prospered in India, where it became the most powerful political force and outrivalled its Portuguese and French competitors. For more than one hundred years, English and French trading companies had fought one another for supremacy, and, by the middle of the 18th century, competition between the British and the French had heated up. French defeat by the British under the command of Robert Clive during the Seven Years' War (1756–1763) marked the end of the French stake in India. The British East India Company, although still in direct competition with French and Dutch interests until 1763, made great advances at the expense of the Mughal Empire following the subjugation of Bengal at the 1757 Battle of Plassey. The reign of Aurangzeb had marked the height of Mughal power. By 1690 Mughal territorial expansion had reached its greatest extent, encompassing the entire Indian subcontinent. But this period of power was followed by one of decline. Fifty years after the death of Aurangzeb, the great Mughal empire had crumbled. Meanwhile, marauding warlords, nobles, and others bent on gaining power left the subcontinent increasingly anarchic. Although the Mughals kept the imperial title until 1858, the central government had collapsed, creating a power vacuum. 
Aside from defeating the French during the Seven Years' War, Robert Clive, the leader of the East India Company in India, defeated Siraj ud-Daulah, a key Indian ruler of Bengal, at the decisive Battle of Plassey (1757), a victory that ushered in the beginning of a new period in Indian history: that of informal British rule, exercised while the Mughal emperor remained nominally sovereign. The transition to formal imperialism, characterized by Queen Victoria being crowned "Empress of India" in the 1870s, was a gradual process. The first steps toward cementing formal British control extended back to the late 18th century. The British Parliament, disturbed by the idea that a great business concern, interested primarily in profit, was controlling the destinies of millions of people, passed acts in 1773 and 1784 that gave itself the power to control company policies. The East India Company then fought a series of Anglo-Mysore Wars in southern India with the Sultanate of Mysore under Hyder Ali and then Tipu Sultan. Defeat in the First Anglo-Mysore War and stalemate in the Second were followed by victories in the Third and the Fourth. Following Tipu Sultan's death in the fourth war at the Siege of Seringapatam (1799), the kingdom became a protectorate of the company. The East India Company also fought three Anglo-Maratha Wars with the Maratha Confederacy. The First Anglo-Maratha War ended in 1782 with a restoration of the pre-war status quo. The Second and Third Anglo-Maratha Wars resulted in British victories. After the surrender of Peshwa Baji Rao II in 1818, the East India Company acquired control of the large majority of the Indian subcontinent. Until 1858, however, much of India was still officially the dominion of the Mughal emperor. 
Anger among some social groups, however, was seething under the governor-generalship of James Dalhousie (1847–1856), who annexed the Punjab (1849) after victory in the Second Sikh War, annexed seven princely states using the doctrine of lapse, annexed the key state of Oudh on the basis of misgovernment, and upset cultural sensibilities by banning Hindu practices such as sati. The 1857 Indian Rebellion, an uprising initiated by Indian troops, called sepoys, who formed the bulk of the company's armed forces, was the key turning point. Rumour had spread among them that their bullet cartridges were lubricated with pig and cow fat. The cartridges had to be bitten open, which upset the Hindu and Muslim soldiers: the Hindu religion held cows sacred, and for Muslims pork was considered haraam. In one camp, 85 out of 90 sepoys would not accept the cartridges from their garrison officer. The British harshly punished those who refused by jailing them. The Indian people were outraged, and on May 10, 1857, sepoys marched to Delhi and, with the help of soldiers stationed there, captured it. Fortunately for the British, many areas remained loyal and quiescent, allowing the revolt to be crushed after fierce fighting. One important consequence of the revolt was the final collapse of the Mughal dynasty. The mutiny also ended the system of dual control under which the British government and the British East India Company shared authority. The government relieved the company of its political responsibilities, and in 1858, after 258 years of existence, the company relinquished its role. Trained civil servants were recruited from graduates of British universities, and these men set out to rule India. Lord Canning (created earl in 1859), appointed Governor-General of India in 1856, became known as "Clemency Canning", a term of derision for his efforts to restrain revenge against the Indians during the Indian Mutiny. 
When the Government of India was transferred from the company to the Crown, Canning became the first viceroy of India. The company had initiated the first of the Anglo-Burmese Wars in 1824, which led to the total annexation of Burma by the Crown in 1885. The British ruled Burma as a province of British India until 1937, then administered it separately under the Burma Office, except during the Japanese occupation of Burma (1942–1945), until it was granted independence on 4 January 1948. (Unlike India, Burma opted not to join the Commonwealth of Nations.) The denial of equal status to Indians was the immediate stimulus for the formation in 1885 of the Indian National Congress, initially loyal to the Empire but committed from 1905 to increased self-government and by 1930 to outright independence. The "Home charges", payments transferred from India for administrative costs, were a lasting source of nationalist grievance, though the flow declined in relative importance over the decades to independence in 1947. Although majority Hindu and minority Muslim political leaders were able to collaborate closely in their criticism of British policy into the 1920s, British support for a distinct Muslim political organisation, the Muslim League, from 1906, and insistence from the 1920s on separate electorates for religious minorities, are seen by many in India as having contributed to Hindu-Muslim discord and the country's eventual Partition. France, which had lost its empire to the British by the end of the 18th century, had little geographical or commercial basis for expansion in Southeast Asia. After the 1850s, French imperialism was initially impelled by a nationalistic need to rival the United Kingdom and was supported intellectually by the notion that French culture was superior to that of the people of Annam (Vietnam), and by its mission civilisatrice, the "civilizing mission" of the Annamese through their assimilation to French culture and the Catholic religion. 
The pretext for French expansionism in Indochina was the protection of French religious missions in the area, coupled with a desire to find a southern route to China through Tonkin, the European name for a region of northern Vietnam. French religious and commercial interests were established in Indochina as early as the 17th century, but no concerted effort at stabilizing the French position was possible in the face of British strength in the Indian Ocean and French defeat in Europe at the beginning of the 19th century. A mid-19th-century religious revival under the Second Empire provided the atmosphere within which interest in Indochina grew. Anti-Christian persecutions in the Far East provided the pretext for the bombardment of Tourane (Danang) in 1847, and for the invasion and occupation of Danang in 1858 and Saigon in 1859. Under Napoleon III, France feared that French trade with China would be surpassed by the British, and accordingly the French joined the British against China in the Second Opium War from 1857 to 1860, and occupied parts of Vietnam as its gateway to China. By the Treaty of Saigon, signed on June 5, 1862, the Vietnamese emperor ceded France three provinces of southern Vietnam to form the French colony of Cochinchina; France also secured trade and religious privileges in the rest of Vietnam and a protectorate over Vietnam's foreign relations. Gradually French power spread through exploration, the establishment of protectorates, and outright annexations. Their seizure of Hanoi in 1882 led directly to war with China (1883–1885), and the French victory confirmed French supremacy in the region. France governed Cochinchina as a direct colony, central and northern Vietnam under the protectorates of Annam and Tonkin, and Cambodia as a protectorate in one degree or another. Laos too was soon brought under French "protection". By the beginning of the 20th century, France had created an empire in Indochina nearly 50 percent larger than the mother country. 
A Governor-General in Hanoi ruled Cochinchina directly and the other regions through a system of residents. Theoretically, the French maintained the precolonial rulers and administrative structures in Annam, Tonkin, Cochinchina, Cambodia, and Laos, but in fact the governor-generalship was a centralised fiscal and administrative regime ruling the entire region. Although the surviving native institutions were preserved in order to make French rule more acceptable, they were almost completely deprived of any independence of action. The ethnocentric French colonial administrators sought to assimilate the upper classes into France's "superior culture." While the French improved public services and provided commercial stability, the native standard of living declined and precolonial social structures eroded. Indochina, which had a population of over eighteen million in 1914, was important to France for its tin, pepper, coal, cotton, and rice. It is still a matter of debate, however, whether the colony was commercially profitable. Tsarist Russia is not often regarded as a colonial power such as the United Kingdom or France because of the manner of Russian expansions: unlike the United Kingdom, which expanded overseas, the Russian Empire grew from the centre outward by a process of accretion, like the United States. In the 19th century, Russian expansion took the form of a struggle of an effectively landlocked country for access to a warm-water port. Historian Michael Khodarkovsky describes Tsarist Russia as a "hybrid empire" that combined elements of continental and colonial empires. While the British were consolidating their hold on India, Russian expansion had moved steadily eastward to the Pacific, then toward the Caucasus and Central Asia. 
In the early 19th century, Russia succeeded in conquering the South Caucasus and Dagestan from Qajar Iran following the Russo-Persian War (1804–1813), the Russo-Persian War (1826–1828) and the resulting treaties of Gulistan and Turkmenchay, giving Russia direct borders with the heartlands of both Persia and Ottoman Turkey. Russian forces later reached the frontiers of Afghanistan as well (which had the largest foreign border adjacent to British holdings in India). In response to Russian expansion, the defense of India's land frontiers and the control of all sea approaches to the subcontinent via the Suez Canal, the Red Sea, and the Persian Gulf became preoccupations of British foreign policy in the 19th century. This rivalry was called the Great Game. According to the Kazakh scholar Kereihan Amanzholov, Russian colonialism had "no essential difference with the colonialist policies of Britain, France, and other European powers". Anglo-Russian rivalry in the Middle East and Central Asia led to a brief confrontation over Afghanistan in the 1870s. In Persia, both nations set up banks to extend their economic influence. The United Kingdom went so far as to invade Tibet, a land subordinate to the Chinese Qing Empire, in 1904, but withdrew when it became clear that Russian influence was insignificant and when Chinese and Tibetan resistance proved tougher than expected. Qing China had defeated Russia in the early Sino-Russian border conflicts, although the Russian Empire later acquired Outer Manchuria in the Amur Annexation during the Second Opium War. During the Boxer Rebellion, the Russian Empire invaded Manchuria in 1900, and the Blagoveshchensk massacre occurred against Chinese residents on the Russian side of the border. In 1907, the United Kingdom and Russia signed an agreement that, on the surface, ended their rivalry in Central Asia. 
(see Anglo-Russian Convention) As part of the entente, Russia agreed to deal with the sovereign of Afghanistan only through British intermediaries. In turn, the United Kingdom would not annex or occupy Afghanistan. Chinese suzerainty over Tibet was also recognised by both Russia and the United Kingdom, since nominal control by a weak China was preferable to control by either power. Persia was divided into Russian and British spheres of influence and an intervening "neutral" zone. The United Kingdom and Russia chose to reach these uneasy compromises because of growing concern on the part of both powers over German expansion in strategic areas of China and Africa. Following the entente, Russia increasingly intervened in Persian domestic politics and suppressed nationalist movements that threatened both Saint Petersburg and London. After the Russian Revolution, Russia gave up its claim to a sphere of influence, though Soviet involvement persisted alongside the United Kingdom's until the 1940s. In the Middle East, a German company built a railroad from Constantinople to Baghdad and the Persian Gulf in the Ottoman Empire, and another from the north of Persia to the south, connecting the Caucasus with the Persian Gulf. Germany wanted to gain economic influence in the region and then, perhaps, move on to India. This was met with bitter resistance by the United Kingdom, Russia, and France, who divided the region among themselves. The 16th century brought many Jesuit missionaries to China, such as Matteo Ricci, who established missions where Western science was introduced, and where Europeans gathered knowledge of Chinese society, history, culture, and science. During the 18th century, merchants from Western Europe came to China in increasing numbers. However, merchants were confined to Guangzhou and the Portuguese colony of Macau, as they had been since the 16th century. 
European traders were increasingly irritated by what they saw as the relatively high customs duties they had to pay and by the attempts to curb the growing import trade in opium. By 1800, its importation was forbidden by the imperial government. However, the opium trade continued to boom. Early in the 19th century, serious internal weaknesses developed in the Qing dynasty that left China vulnerable to Western, Meiji-period Japanese, and Russian imperialism. In 1839, China found itself fighting the First Opium War with Britain. China was defeated, and in 1842 signed the Treaty of Nanking, the first of the unequal treaties signed during the Qing dynasty. Hong Kong Island was ceded to Britain, and certain ports, including Shanghai and Guangzhou, were opened to British trade and residence. In 1856, the Second Opium War broke out. The Chinese were again defeated, and now forced to accept the terms of the 1858 Treaty of Tientsin. The treaty opened new ports to trade and allowed foreigners to travel in the interior. In addition, Christians gained the right to propagate their religion. The United States (in the Treaty of Wanghia) and Russia later obtained the same prerogatives in separate treaties. Toward the end of the 19th century, China appeared on the way to territorial dismemberment and economic vassalage, the fate that had befallen India's rulers much earlier. Several provisions of these treaties caused long-standing bitterness and humiliation among the Chinese: extraterritoriality (meaning that in a dispute with a Chinese person, a Westerner had the right to be tried in a court under the laws of his own country), customs regulation, and the right to station foreign warships in Chinese waters, including its navigable rivers. Jane E. 
Elliott criticized the allegation that China refused to modernize or was unable to defeat Western armies as simplistic, noting that China embarked on a massive military modernization in the late 1800s after several defeats, buying weapons from Western countries and manufacturing its own at arsenals such as the Hanyang Arsenal during the Boxer Rebellion. In addition, Elliott questioned the claim that Chinese society was traumatized by the Western victories, as many Chinese peasants (90% of the population at that time) living outside the concessions continued about their daily lives, uninterrupted and without any feeling of "humiliation". Historians have judged the Qing dynasty's vulnerability and weakness to foreign imperialism in the 19th century to be based mainly on its maritime naval weakness, while it achieved military success against Westerners on land. The historian Edward L. Dreyer said that "China's nineteenth-century humiliations were strongly related to her weakness and failure at sea. At the start of the Opium War, China had no unified navy and no sense of how vulnerable she was to attack from the sea; British forces sailed and steamed wherever they wanted to go ... In the Arrow War (1856–1860), the Chinese had no way to prevent the Anglo-French expedition of 1860 from sailing into the Gulf of Zhili and landing as near as possible to Beijing. Meanwhile, new but not exactly modern Chinese armies suppressed the midcentury rebellions, bluffed Russia into a peaceful settlement of disputed frontiers in Central Asia, and defeated the French forces on land in the Sino-French War (1884–1885). But the defeat of the fleet, and the resulting threat to steamship traffic to Taiwan, forced China to conclude peace on unfavorable terms." 
During the Sino-French War, Vietnamese forces defeated the French at the Battle of Cầu Giấy (Paper Bridge), the Bắc Lệ ambush, the Battle of Phu Lam Tao, the Battle of Zhenhai, the Battle of Tamsui in the Keelung Campaign and, in the last battle which ended the war, the Battle of Bang Bo (Zhennan Pass), which triggered the French retreat from Lạng Sơn and resulted in the collapse of the French Jules Ferry government in the Tonkin Affair. The Qing dynasty forced Russia to hand over disputed territory in Ili in the Treaty of Saint Petersburg (1881), in what was widely seen by the West as a diplomatic victory for the Qing. Russia acknowledged that Qing China potentially posed a serious military threat. Mass media in the West during this era portrayed China as a rising military power due to its modernization programs and as a major threat to the Western world, invoking fears that China would successfully conquer Western colonies like Australia. The British observer Demetrius Charles de Kavanagh Boulger suggested a British-Chinese alliance to check Russian expansion in Central Asia. During the Ili crisis, when Qing China threatened to go to war against Russia over the Russian occupation of Ili, the British officer Charles George Gordon was sent to China by Britain to advise China on military options against Russia should a potential war break out between China and Russia. The Russians observed the Chinese building up their arsenal of modern weapons during the Ili crisis; the Chinese bought thousands of rifles from Germany. In 1880, massive amounts of military equipment and rifles were shipped via boats to China from Antwerp as China purchased torpedoes, artillery, and 260,260 modern rifles from Europe. The Russian military observer D. V. 
Putiatia visited China in 1888 and found that in Northeastern China (Manchuria) along the Chinese-Russian border, the Chinese soldiers were potentially able to become adept at "European tactics" under certain circumstances, and the Chinese soldiers were armed with modern weapons like Krupp artillery, Winchester carbines, and Mauser rifles. Compared to Russian-controlled areas, more benefits were given to the Muslim Kirghiz in the Chinese-controlled areas. Russian settlers fought against the Muslim nomadic Kirghiz, which led the Russians to believe that the Kirghiz would be a liability in any conflict against China. The Muslim Kirghiz were sure that in an upcoming war China would defeat Russia. Russian sinologists, the Russian media, the threat of internal rebellion, the pariah status inflicted by the Congress of Berlin, and the negative state of the Russian economy all led Russia to concede and negotiate with China in St Petersburg, and to return most of Ili to China. The rise of Japan since the Meiji Restoration as an imperial power led to further subjugation of China. In a dispute over China's longstanding claim of suzerainty in Korea, war broke out between China and Japan, resulting in humiliating defeat for the Chinese. By the Treaty of Shimonoseki (1895), China was forced to recognize effective Japanese rule of Korea, and Taiwan was ceded to Japan until its recovery in 1945, at the end of World War II, by the Republic of China. China's defeat at the hands of Japan was another trigger for future aggressive actions by Western powers. In 1897, Germany demanded and was given a set of exclusive mining and railroad rights in Shandong province. Russia obtained access to Dairen and Port Arthur and the right to build a railroad across Manchuria, thereby achieving complete domination over a large portion of northeastern China. The United Kingdom and France also received a number of concessions. 
At this time, much of China was divided up into "spheres of influence": Germany had influence in Jiaozhou (Kiaochow) Bay, Shandong, and the Yellow River valley; Russia had influence in the Liaodong Peninsula and Manchuria; the United Kingdom had influence in Weihaiwei and the Yangtze Valley; and France had influence in Guangzhou Bay and the provinces of Yunnan, Guizhou and Guangxi. China continued to be divided up into these spheres until the United States, which had no sphere of influence, grew alarmed at the possibility of its businessmen being excluded from Chinese markets. In 1899, Secretary of State John Hay asked the major powers to agree to a policy of equal trading privileges. In 1900, several powers agreed to the U.S.-backed scheme, giving rise to the "Open Door" policy, denoting freedom of commercial access and non-annexation of Chinese territory. In any event, it was in the European powers' interest to have a weak but independent Chinese government. The privileges of the Europeans in China were guaranteed in the form of treaties with the Qing government. In the event that the Qing government totally collapsed, each power risked losing the privileges that it had already negotiated. The erosion of Chinese sovereignty and seizures of land from Chinese by foreigners contributed to a spectacular anti-foreign outbreak in June 1900, when the "Boxers" (properly the society of the "righteous and harmonious fists") attacked foreigners around Beijing. The Imperial Court was divided into anti-foreign and pro-foreign factions, with the pro-foreign faction led by Ronglu and Prince Qing hampering any military effort by the anti-foreign faction led by Prince Duan and Dong Fuxiang. The Qing Empress Dowager ordered all diplomatic ties to be cut off and all foreigners to leave the legations in Beijing and go to Tianjin. The foreigners refused to leave. 
Fueled by entirely false reports that the foreigners in the legations had been massacred, the Eight-Nation Alliance decided to launch an expedition to Beijing to reach the legations, but it underestimated the Qing military. The Qing and the Boxers defeated the foreigners of the Seymour Expedition, forcing them to turn back at the Battle of Langfang. In response to the foreign attack on the Taku Forts, the Qing declared war on the foreigners. Qing forces and the foreigners fought a fierce battle at the Battle of Tientsin before the foreigners could launch a second expedition. On their second attempt, the Gaselee Expedition, with a much larger force, the foreigners managed to reach Beijing and fight the Battle of Peking. British and French forces looted, plundered and burned the Old Summer Palace to the ground for the second time (the first time being in 1860, following the Second Opium War). German forces were particularly severe in exacting revenge for the killing of their ambassador, due to the orders of Kaiser Wilhelm II, who held anti-Asian sentiments, while Russia tightened its hold on Manchuria in the northeast until its crushing defeat by Japan in the war of 1904–1905. The Qing court evacuated to Xi'an and threatened to continue the war against the foreigners, until the foreigners tempered their demands in the Boxer Protocol, promising that China would not have to give up any land, and gave up the demands for the execution of Dong Fuxiang and Prince Duan. The correspondent Douglas Story observed Chinese troops in 1907 and praised their abilities and military skill. Extraterritorial jurisdiction was abandoned by the United Kingdom and the United States in 1943. Chiang Kai-shek forced the French to hand all their concessions back to Chinese control after World War II. 
Foreign political control over leased parts of China ended with the incorporation of Hong Kong and the small Portuguese territory of Macau into the People's Republic of China in 1997 and 1999 respectively. Some Americans in the 19th century advocated for the annexation of Taiwan from China. Taiwanese aborigines often attacked and massacred shipwrecked western sailors. In 1867, during the Rover incident, Taiwanese aborigines attacked shipwrecked American sailors, killing the entire crew. They subsequently defeated a retaliatory expedition by the American military and killed another American during the battle. As the United States emerged as a new imperial power in the Pacific and Asia, one of the two oldest Western imperialist powers in the regions, Spain, was finding it increasingly difficult to maintain control of territories it had held in the regions since the 16th century. In 1896, a widespread revolt against Spanish rule broke out in the Philippines. Meanwhile, the recent string of U.S. territorial gains in the Pacific posed an even greater threat to Spain's remaining colonial holdings. As the U.S. continued to expand its economic and military power in the Pacific, it declared war against Spain in 1898. During the Spanish–American War, U.S. Admiral Dewey destroyed the Spanish fleet at Manila and U.S. troops landed in the Philippines. Spain later agreed by treaty to cede the Philippines in Asia and Guam in the Pacific. In the Caribbean, Spain ceded Puerto Rico to the U.S. The war also marked the end of Spanish rule in Cuba, which was to be granted nominal independence but remained heavily influenced by the U.S. government and U.S. business interests. One year following its treaty with Spain, the U.S. occupied the small Pacific outpost of Wake Island. The Filipinos, who assisted U.S. troops in fighting the Spanish, wished to establish an independent state and, on June 12, 1898, declared independence from Spain. 
In 1899, fighting between the Filipino nationalists and the U.S. broke out; it took the U.S. almost fifteen years to fully subdue the insurgency. The U.S. sent 70,000 troops and suffered thousands of casualties. The Filipino insurgents, however, suffered considerably higher casualties than the Americans. Most casualties in the war were civilians dying primarily from disease and famine. U.S. counter-insurgency operations in rural areas often included scorched-earth tactics, which involved burning down villages and concentrating civilians into camps known as "protected zones". The execution of U.S. soldiers taken prisoner by the Filipinos led to disproportionate reprisals by American forces. The Moro Muslims fought against the Americans in the Moro Rebellion. In 1914, Dean C. Worcester, U.S. Secretary of the Interior for the Philippines (1901–1913), described "the regime of civilisation and improvement which started with American occupation and resulted in developing naked savages into cultivated and educated men". Nevertheless, some Americans, such as Mark Twain, deeply opposed American involvement and imperialism in the Philippines, leading to the abandonment of attempts to construct a permanent U.S. naval base and to use the islands as an entry point to the Chinese market. In 1916, Congress guaranteed the independence of the Philippines by 1945. World War I brought about the fall of several empires in Europe. This had repercussions around the world. The defeated Central Powers included Germany and the Turkish Ottoman Empire. Germany lost all of its colonies in Asia. German New Guinea, a part of Papua New Guinea, came to be administered by Australia. German possessions and concessions in China, including Qingdao, became the subject of a controversy during the Paris Peace Conference when the Beiyang government in China agreed to cede these interests to Japan, to the anger of many Chinese people. 
Although the Chinese diplomats refused to sign the agreement, these interests were ceded to Japan with the support of the United States and the United Kingdom. Turkey gave up her provinces: Syria, Palestine, and Mesopotamia (now Iraq) came under French and British control as League of Nations Mandates. The discovery of petroleum, first in Iran and then in the Arab lands in the interbellum, provided a new focus for activity on the part of the United Kingdom, France, and the United States. In 1641, all Westerners were thrown out of Japan. For the next two centuries, Japan was free from Western contact, except at the port of Nagasaki, which Japan allowed Dutch merchant vessels to enter on a limited basis. Japan's freedom from Western contact ended on 8 July 1853, when Commodore Matthew Perry of the U.S. Navy sailed a squadron of black-hulled warships into Edo (modern Tokyo) harbor. The Japanese told Perry to sail to Nagasaki, but he refused. Perry sought to present a letter from U.S. President Millard Fillmore to the emperor which demanded concessions from Japan. Japanese authorities responded by stating that they could not present the letter directly to the emperor, but scheduled a meeting on 14 July with a representative of the emperor. On 14 July, the squadron sailed towards the shore, giving a demonstration of its cannons' firepower thirteen times. Perry landed with a large detachment of Marines and presented the emperor's representative with Fillmore's letter. Perry said he would return, and did so, this time with even more warships. The U.S. show of force led to Japan's concession to the Convention of Kanagawa on 31 March 1854. This treaty conferred extraterritoriality on American nationals, as well as opening further treaty ports beyond Nagasaki. It was followed by similar treaties with the United Kingdom, the Netherlands, Russia and France. 
These events made Japanese authorities aware that the country was lacking technologically and needed the strength of industrialism in order to keep its power. This realisation eventually led to a civil war and political reform known as the Meiji Restoration. The Meiji Restoration of 1868 led to administrative overhaul, deflation and subsequent rapid economic development. Japan had limited natural resources of her own and sought both overseas markets and sources of raw materials, fuelling a drive for imperial conquest which began with the defeat of China in 1895. Taiwan, ceded by Qing dynasty China, became the first Japanese colony. In 1899, Japan won agreements from the great powers to abandon extraterritoriality for their citizens, and an alliance with the United Kingdom established it in 1902 as an international power. Its spectacular defeat of Russia's navy in 1905 gave it the southern half of the island of Sakhalin; exclusive Japanese influence over Korea (propinquity); the former Russian lease of the Liaodong Peninsula with Port Arthur (Lüshunkou); and extensive rights in Manchuria (see the Russo-Japanese War). The Empire of Japan and the Joseon dynasty in Korea formed bilateral diplomatic relations in 1876. China lost its suzerainty over Korea after defeat in the Sino-Japanese War in 1894. Russia also lost influence on the Korean peninsula with the Treaty of Portsmouth, the result of the Russo-Japanese War of 1904. The Joseon dynasty became increasingly dependent on Japan. Korea became a protectorate of Japan with the Japan–Korea Treaty of 1905, and was then de jure annexed to Japan with the Japan–Korea Treaty of 1910. Japan was now one of the most powerful forces in the Far East, and in 1914, it entered World War I on the side of the Allies, seizing German-occupied Kiaochow and subsequently demanding Chinese acceptance of Japanese political influence and territorial acquisitions (Twenty-One Demands, 1915). 
Mass protests in Peking in 1919, which sparked widespread Chinese nationalism, coupled with Allied (and particularly U.S.) opinion, led to Japan's abandonment of most of the demands and the return of Kiaochow to China in 1922. Japan had received the German territory under the Treaty of Versailles. Tensions with China increased over the 1920s, and in 1931 the Japanese Kwantung Army, based in Manchuria, seized control of the region without authorization from Tokyo. Intermittent conflict with China led to full-scale war in mid-1937, drawing Japan toward an overambitious bid for Asian hegemony (the Greater East Asia Co-Prosperity Sphere), which ultimately led to defeat and the loss of all its overseas territories after World War II (see Japanese expansionism and Japanese nationalism). In the aftermath of World War II, European colonial powers, controlling more than one billion people throughout the world, still ruled most of the Middle East, South East Asia, and the Indian Subcontinent. However, the image of European pre-eminence was shattered by the wartime Japanese occupations of large portions of British, French, and Dutch territories in the Pacific. The destabilisation of European rule led to the rapid growth of nationalist movements in Asia—especially in Indonesia, Malaya, Burma, and French Indochina (Vietnam, Cambodia, and Laos). The war, however, only accelerated forces already in existence undermining Western imperialism in Asia. Throughout the colonial world, the processes of urbanisation and capitalist investment created professional and merchant classes that emerged as new Westernised elites. While imbued with Western political and economic ideas, these classes increasingly grew to resent their unequal status under European rule. In India, the westward movement of Japanese forces towards Bengal during World War II had led to major concessions on the part of British authorities to Indian nationalist leaders. 
In 1947, the United Kingdom, devastated by war and embroiled in an economic crisis at home, granted British India its independence as two nations: India and Pakistan. Myanmar (Burma) and Sri Lanka (Ceylon) also gained their independence from the United Kingdom the following year, in 1948. In the Middle East, the United Kingdom granted independence to Jordan in 1946 and, two years later, in 1948, ended its mandate of Palestine, from which the independent nation of Israel emerged. Following the end of the war, nationalists in Indonesia demanded complete independence from the Netherlands. A brutal conflict ensued, and finally, in 1949, through United Nations mediation, the Dutch East Indies achieved independence, becoming the new nation of Indonesia. Dutch imperialism moulded this new multi-ethnic state, comprising roughly 3,000 islands of the Indonesian archipelago with a population at the time of over 100 million. The end of Dutch rule opened up latent tensions between the roughly 300 distinct ethnic groups of the islands, with the major ethnic fault line being between the Javanese and the non-Javanese. Dutch New Guinea remained under Dutch administration until 1962 (see also West New Guinea dispute). In the Philippines, the U.S. remained committed to its previous pledges to grant the islands their independence, and the Philippines became the first of the Western-controlled Asian colonies to be granted independence after World War II. However, the Philippines remained under pressure to adopt a political and economic system similar to that of the U.S. This aim was greatly complicated by the rise of new political forces. During the war, the Hukbalahap (People's Army), which had strong ties to the Communist Party of the Philippines (PKP), fought against the Japanese occupation of the Philippines and won strong popularity among many sectors of the Filipino working class and peasantry. 
In 1946, the PKP participated in elections as part of the Democratic Alliance. However, with the onset of the Cold War, its growing political strength drew a reaction from the ruling government and the United States, resulting in the repression of the PKP and its associated organizations. In 1948, the PKP began organizing an armed struggle against the government and the continued U.S. military presence. In 1950, the PKP created the People's Liberation Army (Hukbong Mapagpalaya ng Bayan), which mobilized thousands of troops throughout the islands. The insurgency lasted until 1956, when the PKP gave up armed struggle. In 1968, the PKP underwent a split, and in 1969 the Maoist faction of the PKP created the New People's Army. Maoist rebels re-launched an armed struggle against the government and the U.S. military presence in the Philippines, which continues to this day. France remained determined to retain its control of Indochina. However, in Hanoi in 1945, a broad front of nationalists and communists led by Ho Chi Minh declared an independent Democratic Republic of Vietnam, commonly referred to as the Viet Minh regime by Western outsiders. France, seeking to regain control of Vietnam, countered with a vague offer of self-government under French rule. France's offers were unacceptable to Vietnamese nationalists, and in December 1946 the Việt Minh launched a rebellion against the French authority governing the colonies of French Indochina. The first few years of the war involved a low-level rural insurgency against French authority. However, after the Chinese communists reached the northern border of Vietnam in 1949, the conflict turned into a conventional war between two armies equipped with modern weapons supplied by the United States and the Soviet Union. Meanwhile, France granted the State of Vietnam, based in Saigon, independence in 1949, while Laos and Cambodia received independence in 1953. 
The US recognized the regime in Saigon and provided the French military effort with aid. Meanwhile, in Vietnam, the French war against the Viet Minh continued for nearly eight years. The French were gradually worn down by guerrilla and jungle fighting. The turning point for France came at Dien Bien Phu in 1954, which resulted in the surrender of ten thousand French troops. Paris was forced to accept a political settlement that year at the Geneva Conference, which led to a precarious set of agreements regarding the future political status of Laos, Cambodia, and Vietnam.
[ { "paragraph_id": 0, "text": "The influence and imperialism of Western Europe and associated states (such as Russia, Japan, and the United States) peaked in Asian territories from the colonial period beginning in the 16th century and substantially declined with 20th-century decolonization. It originated in the 15th-century search for trade routes to the Indian subcontinent and Southeast Asia that led directly to the Age of Discovery, and additionally the introduction of early modern warfare into what Europeans first called the East Indies and later the Far East. By the early 16th century, the Age of Sail greatly expanded Western European influence and development of the spice trade under colonialism. European-style colonial empires and imperialism operated in Asia throughout six centuries of colonialism, formally ending with the handover of the Portuguese Empire's last colony, Macau, in 1999. The empires introduced Western concepts of the nation and the multinational state. This article attempts to outline the consequent development of the Western concept of the nation state.", "title": "" }, { "paragraph_id": 1, "text": "European political power, commerce, and culture in Asia gave rise to growing trade in commodities—a key development in the rise of today's modern world free market economy. In the 16th century, the Portuguese broke the (overland) monopoly of the Arabs and Italians in trade between Asia and Europe by discovering the sea route to India around the Cape of Good Hope. The ensuing rise of the rival Dutch East India Company gradually eclipsed Portuguese influence in Asia. Dutch forces first established independent bases in the East (most significantly Batavia, the heavily fortified headquarters of the Dutch East India Company) and then between 1640 and 1660 wrested Malacca, Ceylon, some southern Indian ports, and the lucrative Japan trade from the Portuguese. 
Later, the English and the French established settlements in India and trade with China, and their acquisitions would gradually surpass those of the Dutch. Following the end of the Seven Years' War in 1763, the British eliminated French influence in India and established the British East India Company (founded in 1600) as the most important political force on the Indian subcontinent.", "title": "" }, { "paragraph_id": 2, "text": "Before the Industrial Revolution in the mid-to-late 19th century, demand for oriental goods such as porcelain, silk, spices, and tea remained the driving force behind European imperialism. The Western European stake in Asia remained confined largely to trading stations and strategic outposts necessary to protect trade. Industrialization, however, dramatically increased European demand for Asian raw materials, with the severe Long Depression of the 1870s provoking a scramble for new markets for European industrial products and financial services in Africa, the Americas, Eastern Europe, and especially in Asia. This scramble coincided with a new era in global colonial expansion known as \"the New Imperialism\", which saw a shift in focus from trade and indirect rule to formal colonial control of vast overseas territories ruled as political extensions of their mother countries. Between the 1870s and the beginning of World War I in 1914, the United Kingdom, France, and the Netherlands—the established colonial powers in Asia—added to their empires vast expanses of territory in the Middle East, the Indian Subcontinent, and Southeast Asia. 
In the same period, the Empire of Japan, following the Meiji Restoration; the German Empire, following the end of the Franco-Prussian War in 1871; Tsarist Russia; and the United States, following the Spanish–American War in 1898, quickly emerged as new imperial powers in East Asia and in the Pacific Ocean area.", "title": "" }, { "paragraph_id": 3, "text": "In Asia, World War I and World War II were played out as struggles among several key imperial powers, with conflicts involving the European powers along with Russia and the rising American and Japanese powers. None of the colonial powers, however, possessed the resources to withstand the strains of both World Wars and maintain their direct rule in Asia. Although nationalist movements throughout the colonial world led to the political independence of nearly all of Asia's remaining colonies, decolonization was overtaken by the Cold War. South East Asia, South Asia, the Middle East, and East Asia remained embedded in a world economic, financial, and military system in which the great powers competed to extend their influence. However, the rapid post-war economic development and rise of the industrialized developed countries of Taiwan, Singapore, South Korea and Japan and the developing countries of India, the People's Republic of China and its autonomous territory of Hong Kong, along with the collapse of the Soviet Union, have greatly diminished Western European influence in Asia. The United States remains influential with trade and military bases in Asia.", "title": "" }, { "paragraph_id": 4, "text": "European exploration of Asia started in ancient Roman times along the Silk Road. The Romans had knowledge of lands as distant as China. 
Trade with India through the Roman Egyptian Red Sea ports was significant in the first centuries of the Common Era.", "title": "Early European exploration of Asia" }, { "paragraph_id": 5, "text": "In the 13th and 14th centuries, a number of Europeans, many of them Christian missionaries, had sought to penetrate into China. The most famous of these travelers was Marco Polo. But these journeys had little permanent effect on east–west trade because of a series of political developments in Asia in the last decades of the 14th century, which put an end to further European exploration of Asia. The Yuan dynasty in China, which had been receptive to European missionaries and merchants, was overthrown, and the new Ming rulers proved unreceptive to religious proselytism. Meanwhile, the Turks consolidated control over the eastern Mediterranean, closing off key overland trade routes. Thus, until the 15th century, only minor trade and cultural exchanges between Europe and Asia continued at certain terminals controlled by Muslim traders.", "title": "Early European exploration of Asia" }, { "paragraph_id": 6, "text": "Western European rulers determined to find new trade routes of their own. The Portuguese spearheaded the drive to find oceanic routes that would provide cheaper and easier access to South and East Asian goods. This charting of oceanic routes between East and West began with the unprecedented voyages of Portuguese and Spanish sea captains. Their voyages were influenced by medieval European adventurers, who had journeyed overland to the Far East and contributed to geographical knowledge of parts of Asia upon their return.", "title": "Early European exploration of Asia" }, { "paragraph_id": 7, "text": "In 1488, Bartolomeu Dias rounded the southern tip of Africa under the sponsorship of Portugal's John II, from which point he noticed that the coast swung northeast (Cape of Good Hope). 
Although Dias' crew forced him to turn back, by 1497 Portuguese navigator Vasco da Gama had embarked on the first all-sea voyage from Europe to India. In 1520, Ferdinand Magellan, a Portuguese navigator in the service of the Crown of Castile ('Spain'), found a sea route into the Pacific Ocean.", "title": "Early European exploration of Asia" }, { "paragraph_id": 8, "text": "In 1509, the Portuguese under Francisco de Almeida won the decisive battle of Diu against a joint Mamluk and Arab fleet sent to expel the Portuguese from the Arabian Sea. The victory enabled Portugal to implement its strategy of controlling the Indian Ocean.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 9, "text": "Early in the 16th century, Afonso de Albuquerque emerged as the Portuguese colonial viceroy most instrumental in consolidating Portugal's holdings in Africa and in Asia. He understood that Portugal could wrest commercial supremacy from the Arabs only by force, and therefore devised a plan to establish forts at strategic sites which would dominate the trade routes and also protect Portuguese interests on land. In 1510, he conquered Goa in India, which enabled him to gradually consolidate control of most of the commercial traffic between Europe and Asia, largely through trade; Europeans started to carry on trade from forts, acting as foreign merchants rather than as settlers. 
In contrast, early European expansion in the \"West Indies\" (later known to Europeans as a separate continent from Asia that they would call the \"Americas\"), following the 1492 voyage of Christopher Columbus, involved heavy settlement in colonies that were treated as political extensions of the mother countries.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 10, "text": "Lured by the potential of high profits from another expedition, the Portuguese established a permanent base in Cochin, south of the Indian trade port of Calicut, in the early 16th century. In 1510, the Portuguese, led by Afonso de Albuquerque, seized Goa on the coast of India, which Portugal held until 1961, along with Diu and Daman (the remaining territory and enclaves in India from a former network of coastal towns and smaller fortified trading ports added and abandoned or lost centuries before). The Portuguese soon acquired a monopoly over trade in the Indian Ocean.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 11, "text": "Portuguese viceroy Albuquerque (1509–1515) resolved to consolidate Portuguese holdings in Africa and Asia, and secure control of trade with the East Indies and China. His first objective was Malacca, which controlled the narrow strait through which most Far Eastern trade moved. Captured in 1511, Malacca became the springboard for further eastward penetration, starting with the voyage of António de Abreu and Francisco Serrão in 1512, ordered by Albuquerque, to the Moluccas. Years later, the first trading posts were established in the Moluccas, or \"Spice Islands\", the source of some of the world's most hotly demanded spices, and from there others followed in Makassar and, on a smaller scale, in the Lesser Sunda Islands. 
By 1513–1516, the first Portuguese ships had reached Canton on the southern coasts of China.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 12, "text": "In 1513, after a failed attempt to conquer Aden, Albuquerque entered the Red Sea with an armada, the first time Europeans had sailed into it by the ocean route; and in 1515, Albuquerque consolidated Portuguese hegemony at the gates of the Persian Gulf, already begun by him in 1507, with control of Muscat and Ormuz. Shortly after, other fortified bases and forts were annexed and built along the Gulf, and in 1521, through a military campaign, the Portuguese annexed Bahrain.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 13, "text": "The Portuguese conquest of Malacca triggered the Malayan–Portuguese war. In 1521, Ming dynasty China defeated the Portuguese at the Battle of Tunmen and then again at the Battle of Xicaowan. The Portuguese tried to establish trade with China by smuggling illegally with pirates on the offshore islands off the coast of Zhejiang and Fujian, but they were driven away by the Ming navy in the 1530s–1540s.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 14, "text": "In 1557, China decided to lease Macau to the Portuguese as a place where they could dry goods they transported on their ships; the Portuguese held it until 1999. The Portuguese, based at Goa and Malacca, had now established a lucrative maritime empire in the Indian Ocean meant to monopolize the spice trade. The Portuguese also began a channel of trade with the Japanese, becoming the first recorded Westerners to have visited Japan. 
This contact introduced Christianity and firearms into Japan.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 15, "text": "In 1505 (possibly earlier, in 1501), the Portuguese, through Lourenço de Almeida, the son of Francisco de Almeida, reached Ceylon. The Portuguese founded a fort at the city of Colombo in 1517 and gradually extended their control over the coastal areas and inland. In a series of military conflicts and political maneuvers, the Portuguese extended their control over the Sinhalese kingdoms, including Jaffna (1591), Raigama (1593), Sitawaka (1593), and Kotte (1594). However, the aim of unifying the entire island under Portuguese control faced fierce resistance from the Kingdom of Kandy. The Portuguese, led by Pedro Lopes de Sousa, launched a full-scale military invasion of the kingdom of Kandy in the Campaign of Danture of 1594. The invasion was a disaster for the Portuguese, with their entire army wiped out by Kandyan guerrilla warfare. Constantino de Sá, romantically celebrated in the 17th-century Sinhalese epic (also for his greater humanism and tolerance compared to other governors), led the last military operation, which also ended in disaster. He died in the Battle of Randeniwela, refusing to abandon his troops in the face of total annihilation.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 16, "text": "The energies of Castile (later, the unified Spain), the other major colonial power of the 16th century, were largely concentrated on the Americas, not South and East Asia, but the Spanish did establish a footing in the Far East in the Philippines. After fighting the Portuguese over the Spice Islands from 1522 and reaching an agreement between the two powers in 1529 (in the Treaty of Zaragoza), the Spanish, led by Miguel López de Legazpi, gradually settled and conquered the Philippines from 1564. 
After the discovery of the return voyage to the Americas by Andres de Urdaneta in 1565, cargoes of Chinese goods were transported from the Philippines to Mexico and from there to Spain. By this long route, Spain reaped some of the profits of Far Eastern commerce. Spanish officials converted the islands to Christianity and established some settlements, permanently establishing the Philippines as the area of East Asia most oriented toward the West in terms of culture and commerce. The Moro Muslims fought against the Spanish for over three centuries in the Spanish–Moro conflict.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 17, "text": "The lucrative trade was vastly expanded when the Portuguese began to export slaves from Africa in 1541; however, over time, the rise of the slave trade left Portugal over-extended, and vulnerable to competition from other Western European powers. Envious of Portugal's control of trade routes, other Western European nations—mainly the Netherlands, France, and England—began to send in rival expeditions to Asia. In 1642, the Dutch drove the Portuguese out of the Gold Coast in Africa, the source of the bulk of Portuguese slave laborers, leaving this rich slaving area to other Europeans, especially the Dutch and the English.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 18, "text": "Rival European powers began to make inroads in Asia as the Portuguese and Spanish trade in the Indian Ocean declined primarily because they had become hugely over-stretched financially due to the limitations on their investment capacity and contemporary naval technology. 
Both of these factors worked in tandem, making control over Indian Ocean trade extremely expensive.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 19, "text": "The existing Portuguese interests in Asia proved sufficient to finance further colonial expansion and entrenchment in areas regarded as of greater strategic importance in Africa and Brazil. Portuguese maritime supremacy was lost to the Dutch in the 17th century, and with this came serious challenges for the Portuguese. However, they still clung to Macau and settled a new colony on the island of Timor. Only in the 1960s and 1970s did the Portuguese begin to relinquish their colonies in Asia. Goa was invaded by India in 1961 and became an Indian state in 1987; Portuguese Timor was abandoned in 1975 and was then invaded by Indonesia. It became an independent country in 2002, and Macau was handed back to the Chinese under a treaty in 1999.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 20, "text": "The arrival of the Portuguese and Spanish and their holy wars against Muslim states in the Malayan–Portuguese war, the Spanish–Moro conflict and the Castilian War inflamed religious tensions and turned Southeast Asia into an arena of conflict between Muslims and Christians. The Brunei Sultanate's capital at Kota Batu was assaulted by Governor Sande, who led the 1578 Spanish attack.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 21, "text": "The Spanish word for \"savages\", cafres, came from the Arabic word for \"infidel\", kafir, and was used by the Spanish to refer to their own \"Christian savages\" who were arrested in Brunei. 
After the Spaniards used the term \"accursed doctrine\" to attack Islam, the Brunei Sultan retorted that the Castilians were kafir, men who have no souls and are condemned by fire when they die, not least because they eat pork; this exchange fed into the hatred between Muslims and Christians sparked by their 1571 war against Brunei. The Sultan's words were a response to insults coming from the Spanish at Manila in 1578. Other Muslims from Champa, Java, Borneo, Luzon, Pahang, Demak, Aceh, and the Malays echoed the rhetoric of holy war against the Spanish and the Iberian Portuguese, calling them kafir enemies, in contrast to their earlier, more nuanced views of the Portuguese in the Hikayat Tanah Hitu and the Sejarah Melayu. Spain's war against Brunei was defended in an apologia written by Doctor De Sande. The British eventually partitioned and took over Brunei, while Sulu was attacked by the British, Americans, and Spanish, causing its breakdown and downfall after both sultanates had thrived for four centuries, from 1500 to 1900. The Atjehnese, led by Zayn al-din, and the Muslims of the Philippines saw the Spanish invasion as an invasion of Dar al-Islam by \"kafirs\", since the Spanish brought the idea of a crusader holy war against the Muslim Moros just as the Portuguese did in Indonesia and India against what they called \"Moors\"; both viewed their 16th-century political and commercial conquests through the lens of religion.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 22, "text": "The Spanish launched an attack against Jolo in 1578 and destroyed it in 1875; it was destroyed once again in 1974, this time by the Philippines. 
The Spanish first set foot on Borneo in Brunei.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 23, "text": "The Spanish war against Brunei failed to conquer Brunei, but it totally cut off the Philippines from Brunei's influence; the Spanish then started colonizing Mindanao and building fortresses. In response, the Bisayas, where Spanish forces were stationed, were subjected to retaliatory attacks by the Magindanao in 1599–1600 due to the Spanish attacks on Mindanao.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 24, "text": "The Brunei royal family was related to the Muslim rajahs who ruled the principality of Manila (the Kingdom of Maynila) in 1570, and this was what the Spaniards came across on their initial arrival in Manila. Although Islam was already widespread in the 16th-century Philippines, Spain uprooted it from areas where its hold was shallow after it began to force Christianity on the islands in its conquests after 1521. In the Cebu islands of the Philippines, the natives killed the Spanish fleet's leader, Magellan. Borneo's western coastal areas at Landak, Sukadana, and Sambas saw the growth of Muslim states in the sixteenth century; in the 15th century, the Bruneian king Maharaja Kama died and was buried at Nanking, the capital of China, during his visit to China with Zheng He's fleet.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 25, "text": "The Spanish were expelled from Brunei in 1579 after they attacked in 1578. 
There were fifty thousand inhabitants in Brunei before the Spanish attack of 1597.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 26, "text": "During first contact with China, numerous aggressions and provocations were undertaken by the Portuguese. They believed they could mistreat the non-Christians because they themselves were Christians, and they acted in the name of their religion in committing crimes and atrocities. This resulted in the Battle of Xicaowan, where the local Chinese navy defeated and captured a fleet of Portuguese caravels.", "title": "Portuguese and Spanish trade and colonization in Asia" }, { "paragraph_id": 27, "text": "The Portuguese decline in Asia was accelerated by attacks on their commercial empire by the Dutch and the English, which began a global struggle over the empire in Asia that lasted until the end of the Seven Years' War in 1763. The Dutch revolt against Spanish rule facilitated Dutch encroachment on the Portuguese monopoly over South and East Asian trade. The Dutch looked on Spain's trade and colonies as potential spoils of war. When the two crowns of the Iberian peninsula were joined in 1581, the Dutch felt free to attack Portuguese territories in Asia.", "title": "Dutch trade and colonization in Asia" }, { "paragraph_id": 28, "text": "By the 1590s, a number of Dutch companies were formed to finance trading expeditions in Asia. Because competition lowered their profits, and because of the doctrines of mercantilism, in 1602 the companies united into a cartel and formed the Dutch East India Company, receiving from the government the right to trade and colonize territory in the area stretching from the Cape of Good Hope eastward to the Strait of Magellan.", "title": "Dutch trade and colonization in Asia" }, { "paragraph_id": 29, "text": "In 1605, armed Dutch merchants captured the Portuguese fort at Amboyna in the Moluccas, which was developed into the company's first secure base. 
Over time, the Dutch gradually consolidated control over the great trading ports of the East Indies. This control allowed the company to monopolise the world spice trade for decades. Their monopoly over the spice trade became complete after they drove the Portuguese from Malacca in 1641 and Ceylon in 1658.", "title": "Dutch trade and colonization in Asia" }, { "paragraph_id": 30, "text": "Dutch East India Company colonies or outposts were later established in Atjeh (Aceh), 1667; Macassar, 1669; and Bantam, 1682. The company established its headquarters at Batavia (today Jakarta) on the island of Java. Outside the East Indies, the Dutch East India Company colonies or outposts were also established in Persia (Iran), Bengal (now Bangladesh and part of India), Mauritius (1638-1658/1664-1710), Siam (now Thailand), Guangzhou (Canton, China), Taiwan (1624–1662), and southern India (1616–1795).", "title": "Dutch trade and colonization in Asia" }, { "paragraph_id": 31, "text": "Ming dynasty China defeated the Dutch East India Company in the Sino-Dutch conflicts. The Chinese first defeated and drove the Dutch out of the Pescadores in 1624. The Ming navy under Zheng Zhilong defeated the Dutch East India Company's fleet at the 1633 Battle of Liaoluo Bay. In 1662, Zheng Zhilong's son Zheng Chenggong (also known as Koxinga) expelled the Dutch from Taiwan after defeating them in the siege of Fort Zeelandia. 
(see History of Taiwan). Further, the Dutch East India Company's trade post on Dejima (1641–1857), an artificial island off the coast of Nagasaki, was for a long time the only place where Europeans could trade with Japan.", "title": "Dutch trade and colonization in Asia" }, { "paragraph_id": 32, "text": "The Vietnamese Nguyễn lords defeated the Dutch in a naval battle in 1643.", "title": "Dutch trade and colonization in Asia" }, { "paragraph_id": 33, "text": "The Cambodians defeated the Dutch in the Cambodian–Dutch War in 1644.", "title": "Dutch trade and colonization in Asia" }, { "paragraph_id": 34, "text": "In 1652, Jan van Riebeeck established an outpost at the Cape of Good Hope (the southwestern tip of Africa, currently in South Africa) to restock company ships on their journey to East Asia. This post later became a fully fledged colony, the Cape Colony (1652–1806). As the Cape Colony attracted increasing Dutch and European settlement, the Dutch founded the city of Kaapstad (Cape Town).", "title": "Dutch trade and colonization in Asia" }, { "paragraph_id": 35, "text": "By 1669, the Dutch East India Company was the richest private company in history, with a huge fleet of merchant ships and warships, tens of thousands of employees, a private army consisting of thousands of soldiers, and a reputation among its stockholders for high dividend payments.", "title": "Dutch trade and colonization in Asia" }, { "paragraph_id": 36, "text": "The company was in almost constant conflict with the English; relations were particularly tense following the Amboyna Massacre in 1623. During the 18th century, Dutch East India Company possessions were increasingly focused on the East Indies. After the fourth war between the United Provinces and England (1780–1784), the company suffered increasing financial difficulties. In 1799, the company was dissolved, commencing official colonisation of the East Indies. 
During the era of New Imperialism, the territorial claims of the Dutch East India Company (VOC) expanded into a fully fledged colony named the Dutch East Indies. Partly driven by renewed colonial aspirations of fellow European nation states, the Dutch strove to establish unchallenged control of the archipelago now known as Indonesia.", "title": "Dutch trade and colonization in Asia" }, { "paragraph_id": 37, "text": "Six years into the formal colonisation of the East Indies, the Dutch Republic in Europe was occupied by the French forces of Napoleon. The Dutch government went into exile in England and formally ceded its colonial possessions to Great Britain. The pro-French Governor General of Java, Jan Willem Janssens, resisted a British invasion force in 1811 until forced to surrender. British Governor Raffles, who later founded the city of Singapore, ruled the colony for the following 10 years of the British interregnum (1806–1816).", "title": "Dutch trade and colonization in Asia" }, { "paragraph_id": 38, "text": "After the defeat of Napoleon and the Anglo-Dutch Treaty of 1814, colonial government of the East Indies was ceded back to the Dutch in 1817. The loss of South Africa and the continued scramble for Africa stimulated the Dutch to secure unchallenged dominion over their colony in the East Indies. The Dutch started to consolidate their power base through extensive military campaigns and elaborate diplomatic alliances with indigenous rulers, ensuring the Dutch tricolor was firmly planted in all corners of the Archipelago. These military campaigns included the Padri War (1821–1837), the Java War (1825–1830) and the Aceh War (1873–1904). This raised the need for a considerable military buildup of the colonial army (KNIL). 
From all over Europe soldiers were recruited to join the KNIL.", "title": "Dutch trade and colonization in Asia" }, { "paragraph_id": 39, "text": "The Dutch concentrated their colonial enterprise in the Dutch East Indies (Indonesia) throughout the 19th century. The Dutch lost control over the East Indies to the Japanese during much of World War II. Following the war, the Dutch fought Indonesian independence forces after Japan surrendered to the Allies in 1945. In 1949, most of what was known as the Dutch East Indies was ceded to the independent Republic of Indonesia. In 1962, Dutch New Guinea was also annexed by Indonesia, de facto ending Dutch imperialism in Asia.", "title": "Dutch trade and colonization in Asia" }, { "paragraph_id": 40, "text": "The English sought to stake out claims in India at the expense of the Portuguese dating back to the Elizabethan era. In 1600, Queen Elizabeth I incorporated the English East India Company (later the British East India Company), granting it a monopoly of trade from the Cape of Good Hope eastward to the Strait of Magellan. In 1639, it acquired Madras on the east coast of India, where it quickly surpassed Portuguese Goa as the principal European trading centre on the Indian Subcontinent.", "title": "British in India" }, { "paragraph_id": 41, "text": "Through bribes, diplomacy, and manipulation of weak native rulers, the company prospered in India, where it became the most powerful political force, and outrivaled its Portuguese and French competitors. For more than one hundred years, English and French trading companies had fought one another for supremacy, and, by the middle of the 18th century, competition between the British and the French had heated up. 
French defeat by the British under the command of Robert Clive during the Seven Years' War (1756–1763) marked the end of the French stake in India.", "title": "British in India" }, { "paragraph_id": 42, "text": "Although still in direct competition with French and Dutch interests until 1763, the British East India Company made great advances at the expense of the Mughal Empire following the subjugation of Bengal at the 1757 Battle of Plassey.", "title": "British in India" }, { "paragraph_id": 43, "text": "The reign of Aurangzeb had marked the height of Mughal power. By 1690, Mughal territorial expansion had reached its greatest extent, encompassing the entire Indian Subcontinent. But this period of power was followed by one of decline. Fifty years after the death of Aurangzeb, the great Mughal empire had crumbled. Meanwhile, marauding warlords, nobles, and others bent on gaining power left the Subcontinent increasingly anarchic. Although the Mughals kept the imperial title until 1858, the central government had collapsed, creating a power vacuum.", "title": "British in India" }, { "paragraph_id": 44, "text": "Aside from defeating the French during the Seven Years' War, Robert Clive, the leader of the East India Company in India, defeated Siraj ud-Daulah, a key Indian ruler of Bengal, at the decisive Battle of Plassey (1757), a victory that ushered in the beginning of a new period in Indian history, that of informal British rule. While the Mughal emperor remained nominally sovereign, the transition to formal imperialism, characterized by Queen Victoria being crowned \"Empress of India\" in the 1870s, was a gradual process. The first step toward cementing formal British control extended back to the late 18th century. 
The British Parliament, disturbed by the idea that a great business concern, interested primarily in profit, was controlling the destinies of millions of people, passed acts in 1773 and 1784 that gave itself the power to control company policies.", "title": "British in India" }, { "paragraph_id": 45, "text": "The East India Company then fought a series of Anglo-Mysore wars in Southern India with the Sultanate of Mysore under Hyder Ali and then Tipu Sultan. Defeats in the First Anglo-Mysore war and stalemate in the Second were followed by victories in the Third and the Fourth. Following Tipu Sultan's death in the fourth war in the Siege of Seringapatam (1799), the kingdom became a protectorate of the company.", "title": "British in India" }, { "paragraph_id": 46, "text": "The East India Company fought three Anglo-Maratha Wars with the Maratha Confederacy. The First Anglo-Maratha War ended in 1782 with a restoration of the pre-war status quo. The Second and Third Anglo-Maratha wars resulted in British victories. After the surrender of Peshwa Bajirao II in 1818, the East India Company acquired control of a large majority of the Indian Subcontinent.", "title": "British in India" }, { "paragraph_id": 47, "text": "Until 1858, however, much of India was still officially the dominion of the Mughal emperor. Anger among some social groups was seething under the governor-generalship of James Dalhousie (1847–1856), who annexed the Punjab (1849) after victory in the Second Sikh War, annexed seven princely states using the doctrine of lapse, annexed the key state of Oudh on the basis of misgovernment, and upset cultural sensibilities by banning Hindu practices such as sati.", "title": "British in India" }, { "paragraph_id": 48, "text": "The 1857 Indian Rebellion, an uprising initiated by Indian troops, called sepoys, who formed the bulk of the company's armed forces, was the key turning point. 
Rumour had spread among them that their bullet cartridges were lubricated with pig and cow fat. The cartridges had to be bitten open, which upset the Hindu and Muslim soldiers. The Hindu religion held cows sacred, and for Muslims pork was considered haraam. In one camp, 85 out of 90 sepoys would not accept the cartridges from their garrison officer. The British harshly punished those who refused by jailing them. The Indian people were outraged, and on May 10, 1857, sepoys marched to Delhi, and, with the help of soldiers stationed there, captured it. Fortunately for the British, many areas remained loyal and quiescent, allowing the revolt to be crushed after fierce fighting. One important consequence of the revolt was the final collapse of the Mughal dynasty. The mutiny also ended the system of dual control under which the British government and the British East India Company shared authority. The government relieved the company of its political responsibilities, and in 1858, after 258 years of existence, the company relinquished its role. Trained civil servants were recruited from graduates of British universities, and these men set out to rule India. Lord Canning (created earl in 1859), appointed Governor-General of India in 1856, became known as \"Clemency Canning\" as a term of derision for his efforts to restrain revenge against the Indians during the Indian Mutiny. When the Government of India was transferred from the company to the Crown, Canning became the first viceroy of India.", "title": "British in India" }, { "paragraph_id": 49, "text": "The Company initiated the first of the Anglo-Burmese Wars in 1824, which led to the total annexation of Burma by the Crown in 1885. The British ruled Burma as a province of British India until 1937, then administered it separately under the Burma Office except during the Japanese occupation of Burma, 1942–1945, until granted independence on 4 January 1948. 
(Unlike India, Burma opted not to join the Commonwealth of Nations.)", "title": "British in India" }, { "paragraph_id": 50, "text": "The denial of equal status to Indians was the immediate stimulus for the formation in 1885 of the Indian National Congress, initially loyal to the Empire but committed from 1905 to increased self-government and by 1930 to outright independence. The \"Home charges\", payments transferred from India for administrative costs, were a lasting source of nationalist grievance, though the flow declined in relative importance over the decades to independence in 1947.", "title": "British in India" }, { "paragraph_id": 51, "text": "Although majority Hindu and minority Muslim political leaders were able to collaborate closely in their criticism of British policy into the 1920s, British support for a distinct Muslim political organisation, the Muslim League, from 1906, and insistence from the 1920s on separate electorates for religious minorities, are seen by many in India as having contributed to Hindu-Muslim discord and the country's eventual Partition.", "title": "British in India" }, { "paragraph_id": 52, "text": "France, which had lost its empire to the British by the end of the 18th century, had little geographical or commercial basis for expansion in Southeast Asia. After the 1850s, French imperialism was initially impelled by a nationalistic need to rival the United Kingdom and was supported intellectually by the notion that French culture was superior to that of the people of Annam (Vietnam), and by its mission civilisatrice—the \"civilizing mission\" of assimilating the Annamese to French culture and the Catholic religion. 
The pretext for French expansionism in Indochina was the protection of French religious missions in the area, coupled with a desire to find a southern route to China through Tonkin, the European name for a region of northern Vietnam.", "title": "France in Indochina" }, { "paragraph_id": 53, "text": "French religious and commercial interests were established in Indochina as early as the 17th century, but no concerted effort at stabilizing the French position was possible in the face of British strength in the Indian Ocean and French defeat in Europe at the beginning of the 19th century. A mid-19th century religious revival under the Second Empire provided the atmosphere within which interest in Indochina grew. Anti-Christian persecutions in the Far East provided the pretext for the bombardment of Tourane (Danang) in 1847, and the invasion and occupation of Danang in 1857 and Saigon in 1858. Under Napoleon III, France, fearing that French trade with China would be surpassed by the British, joined the British against China in the Second Opium War from 1857 to 1860, and occupied parts of Vietnam as its gateway to China.", "title": "France in Indochina" }, { "paragraph_id": 54, "text": "By the Treaty of Saigon, signed on June 5, 1862, the Vietnamese emperor ceded France three provinces of southern Vietnam to form the French colony of Cochinchina; France also secured trade and religious privileges in the rest of Vietnam and a protectorate over Vietnam's foreign relations. Gradually French power spread through exploration, the establishment of protectorates, and outright annexations. Their seizure of Hanoi in 1882 led directly to war with China (1883–1885), and the French victory confirmed French supremacy in the region. France governed Cochinchina as a direct colony, and held central and northern Vietnam (as the protectorates of Annam and Tonkin) and Cambodia as protectorates in one degree or another. 
Laos too was soon brought under French \"protection\".", "title": "France in Indochina" }, { "paragraph_id": 55, "text": "By the beginning of the 20th century, France had created an empire in Indochina nearly 50 percent larger than the mother country. A Governor-General in Hanoi ruled Cochinchina directly and the other regions through a system of residents. Theoretically, the French maintained the precolonial rulers and administrative structures in Annam, Tonkin, Cochinchina, Cambodia, and Laos, but in fact the governor-generalship was a centralised fiscal and administrative regime ruling the entire region. Although the surviving native institutions were preserved in order to make French rule more acceptable, they were almost completely deprived of any independence of action. The ethnocentric French colonial administrators sought to assimilate the upper classes into France's \"superior culture.\" While the French improved public services and provided commercial stability, the native standard of living declined and precolonial social structures eroded. Indochina, which had a population of over eighteen million in 1914, was important to France for its tin, pepper, coal, cotton, and rice. It is still a matter of debate, however, whether the colony was commercially profitable.", "title": "France in Indochina" }, { "paragraph_id": 56, "text": "Tsarist Russia is not often regarded as a colonial power such as the United Kingdom or France because of the manner of Russian expansions: unlike the United Kingdom, which expanded overseas, the Russian Empire grew from the centre outward by a process of accretion, like the United States. 
In the 19th century, Russian expansion took the form of a struggle of an effectively landlocked country for access to a warm-water port.", "title": "Russia and the \"Great Game\"" }, { "paragraph_id": 57, "text": "Historian Michael Khodarkovsky describes Tsarist Russia as a \"hybrid empire\" that combined elements of continental and colonial empires.", "title": "Russia and the \"Great Game\"" }, { "paragraph_id": 58, "text": "While the British were consolidating their hold on India, Russian expansion had moved steadily eastward to the Pacific, then toward the Caucasus and Central Asia. In the early 19th century, Russia succeeded in conquering the South Caucasus and Dagestan from Qajar Iran following the Russo-Persian War (1804–1813), the Russo-Persian War (1826–1828) and the resulting treaties of Gulistan and Turkmenchay, giving Russia direct borders with the heartlands of both Persia and Ottoman Turkey. Russian forces eventually reached the frontiers of Afghanistan as well (which had the largest foreign border adjacent to British holdings in India). In response to Russian expansion, the defense of India's land frontiers and the control of all sea approaches to the subcontinent via the Suez Canal, the Red Sea, and the Persian Gulf became preoccupations of British foreign policy in the 19th century. This was called the Great Game.", "title": "Russia and the \"Great Game\"" }, { "paragraph_id": 59, "text": "According to Kazakh scholar Kereihan Amanzholov, Russian colonialism had \"no essential difference with the colonialist policies of Britain, France, and other European powers\".", "title": "Russia and the \"Great Game\"" }, { "paragraph_id": 60, "text": "Anglo-Russian rivalry in the Middle East and Central Asia led to a brief confrontation over Afghanistan in the 1870s. In Persia, both nations set up banks to extend their economic influence. 
The United Kingdom went so far as to invade Tibet, a land subordinate to the Chinese Qing Empire, in 1904, but withdrew when it became clear that Russian influence was insignificant and when Chinese and Tibetan resistance proved tougher than expected.", "title": "Russia and the \"Great Game\"" }, { "paragraph_id": 61, "text": "Qing China defeated Russia in the early Sino-Russian border conflicts, although the Russian Empire later acquired Outer Manchuria in the Amur Annexation during the Second Opium War. During the Boxer Rebellion, the Russian Empire invaded Manchuria in 1900, and the Blagoveshchensk massacre occurred against Chinese residents on the Russian side of the border.", "title": "Russia and the \"Great Game\"" }, { "paragraph_id": 62, "text": "In 1907, the United Kingdom and Russia signed an agreement that, on the surface, ended their rivalry in Central Asia. (see Anglo-Russian Convention) As part of the entente, Russia agreed to deal with the sovereign of Afghanistan only through British intermediaries. In turn, the United Kingdom would not annex or occupy Afghanistan. Chinese suzerainty over Tibet also was recognised by both Russia and the United Kingdom, since nominal control by a weak China was preferable to control by either power. Persia was divided into Russian and British spheres of influence and an intervening \"neutral\" zone. The United Kingdom and Russia chose to reach these uneasy compromises because of growing concern on the part of both powers over German expansion in strategic areas of China and Africa.", "title": "Russia and the \"Great Game\"" }, { "paragraph_id": 63, "text": "Following the entente, Russia increasingly intervened in Persian domestic politics and suppressed nationalist movements that threatened both Saint Petersburg and London. 
After the Russian Revolution, Russia gave up its claim to a sphere of influence, though Soviet involvement persisted alongside the United Kingdom's until the 1940s.", "title": "Russia and the \"Great Game\"" }, { "paragraph_id": 64, "text": "In the Middle East, a German company built a railroad in the Ottoman Empire from Constantinople to Baghdad and the Persian Gulf, while in Persia it built a railroad from the north of the country to the south, connecting the Caucasus with the Persian Gulf. Germany wanted to gain economic influence in the region and then, perhaps, move on to India. This was met with bitter resistance by the United Kingdom, Russia, and France, who divided the region among themselves.", "title": "Russia and the \"Great Game\"" }, { "paragraph_id": 65, "text": "The 16th century brought many Jesuit missionaries to China, such as Matteo Ricci, who established missions where Western science was introduced, and where Europeans gathered knowledge of Chinese society, history, culture, and science. During the 18th century, merchants from Western Europe came to China in increasing numbers. However, merchants were confined to Guangzhou and the Portuguese colony of Macau, as they had been since the 16th century. European traders were increasingly irritated by what they saw as the relatively high customs duties they had to pay and by the attempts to curb the growing import trade in opium. By 1800, its importation was forbidden by the imperial government. However, the opium trade continued to boom.", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 66, "text": "Early in the 19th century, serious internal weaknesses developed in the Qing dynasty that left China vulnerable to Western, Meiji-period Japanese, and Russian imperialism. In 1839, China found itself fighting the First Opium War with Britain. 
China was defeated and, in 1842, signed the Treaty of Nanking, the first of the unequal treaties signed during the Qing dynasty. Hong Kong Island was ceded to Britain, and certain ports, including Shanghai and Guangzhou, were opened to British trade and residence. In 1856, the Second Opium War broke out. The Chinese were again defeated, and now forced to accept the terms of the 1858 Treaty of Tientsin. The treaty opened new ports to trade and allowed foreigners to travel in the interior. In addition, Christians gained the right to propagate their religion. The United States, in the Treaty of Wanghia, and Russia later obtained the same prerogatives in separate treaties.", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 67, "text": "Toward the end of the 19th century, China appeared on the way to territorial dismemberment and economic vassalage—the fate that had played out much earlier for India's rulers. Several provisions of these treaties caused long-standing bitterness and humiliation among the Chinese: extraterritoriality (meaning that in a dispute with a Chinese person, a Westerner had the right to be tried in a court under the laws of his own country), customs regulation, and the right to station foreign warships in Chinese waters, including its navigable rivers.", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 68, "text": "Jane E. Elliott criticized the allegation that China refused to modernize or was unable to defeat Western armies as simplistic, noting that China embarked on a massive military modernization in the late 1800s after several defeats, buying weapons from Western countries and manufacturing their own at arsenals, such as the Hanyang Arsenal during the Boxer Rebellion. 
In addition, Elliott questioned the claim that Chinese society was traumatized by the Western victories, as many Chinese peasants (90% of the population at that time) living outside the concessions continued about their daily lives, uninterrupted and without any feeling of \"humiliation\".", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 69, "text": "Historians have judged the Qing dynasty's vulnerability and weakness to foreign imperialism in the 19th century to be based mainly on its maritime naval weakness, even as it achieved military success against Westerners on land. The historian Edward L. Dreyer said that \"China’s nineteenth-century humiliations were strongly related to her weakness and failure at sea. At the start of the Opium War, China had no unified navy and no sense of how vulnerable she was to attack from the sea; British forces sailed and steamed wherever they wanted to go... In the Arrow War (1856-1860), the Chinese had no way to prevent the Anglo-French expedition of 1860 from sailing into the Gulf of Zhili and landing as near as possible to Beijing. Meanwhile, new but not exactly modern Chinese armies suppressed the midcentury rebellions, bluffed Russia into a peaceful settlement of disputed frontiers in Central Asia, and defeated the French forces on land in the Sino-French War (1884-1885). 
But the defeat of the fleet, and the resulting threat to steamship traffic to Taiwan, forced China to conclude peace on unfavorable terms.\"", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 70, "text": "During the Sino-French War, Vietnamese forces defeated the French at the Battle of Cầu Giấy (Paper Bridge), the Bắc Lệ ambush, the Battle of Phu Lam Tao, the Battle of Zhenhai, the Battle of Tamsui in the Keelung Campaign, and in the last battle which ended the war, the Battle of Bang Bo (Zhennan Pass), which triggered the French retreat from Lạng Sơn and resulted in the collapse of the French Jules Ferry government in the Tonkin Affair.", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 71, "text": "The Qing dynasty forced Russia to hand over disputed territory in Ili in the Treaty of Saint Petersburg (1881), in what was widely seen by the West as a diplomatic victory for the Qing. Russia acknowledged that Qing China potentially posed a serious military threat. 
Mass media in the West during this era portrayed China as a rising military power due to its modernization programs and as a major threat to the Western world, invoking fears that China would successfully conquer Western colonies like Australia.", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 72, "text": "The British observer Demetrius Charles de Kavanagh Boulger suggested a British-Chinese alliance to check Russian expansion in Central Asia.", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 73, "text": "During the Ili crisis, when Qing China threatened to go to war against Russia over the Russian occupation of Ili, the British officer Charles George Gordon was sent to China by Britain to advise China on military options against Russia should war break out between China and Russia.", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 74, "text": "The Russians observed the Chinese building up their arsenal of modern weapons during the Ili crisis; the Chinese bought thousands of rifles from Germany. In 1880, massive amounts of military equipment and rifles were shipped by boat to China from Antwerp, as China purchased torpedoes, artillery, and 260,260 modern rifles from Europe.", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 75, "text": "The Russian military observer D. V. 
Putiatia visited China in 1888 and found that in Northeastern China (Manchuria) along the Chinese-Russian border, the Chinese soldiers were potentially able to become adept at \"European tactics\" under certain circumstances, and that they were armed with modern weapons like Krupp artillery, Winchester carbines, and Mauser rifles.", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 76, "text": "Compared to Russian-controlled areas, more benefits were given to the Muslim Kirghiz in the Chinese-controlled areas. Russian settlers fought against the Muslim nomadic Kirghiz, which led the Russians to believe that the Kirghiz would be a liability in any conflict against China. The Muslim Kirghiz were sure that in an upcoming war China would defeat Russia.", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 77, "text": "Russian sinologists, the Russian media, the threat of internal rebellion, the pariah status inflicted by the Congress of Berlin, and the negative state of the Russian economy all led Russia to concede and negotiate with China in St Petersburg, and return most of Ili to China.", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 78, "text": "The rise of Japan since the Meiji Restoration as an imperial power led to further subjugation of China. In a dispute over China's longstanding claim of suzerainty in Korea, war broke out between China and Japan, resulting in humiliating defeat for the Chinese. By the Treaty of Shimonoseki (1895), China was forced to recognize effective Japanese rule of Korea, and Taiwan was ceded to Japan until its recovery by the Republic of China in 1945 at the end of World War II.", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 79, "text": "China's defeat at the hands of Japan was another trigger for future aggressive actions by Western powers. 
In 1897, Germany demanded and was given a set of exclusive mining and railroad rights in Shandong province. Russia obtained access to Dairen and Port Arthur and the right to build a railroad across Manchuria, thereby achieving complete domination over a large portion of northeastern China. The United Kingdom and France also received a number of concessions. At this time, much of China was divided up into \"spheres of influence\": Germany had influence in Jiaozhou (Kiaochow) Bay, Shandong, and the Yellow River valley; Russia had influence in the Liaodong Peninsula and Manchuria; the United Kingdom had influence in Weihaiwei and the Yangtze Valley; and France had influence in the Guangzhou Bay and the provinces of Yunnan, Guizhou and Guangxi.", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 80, "text": "China continued to be divided up into these spheres until the United States, which had no sphere of influence, grew alarmed at the possibility of its businessmen being excluded from Chinese markets. In 1899, Secretary of State John Hay asked the major powers to agree to a policy of equal trading privileges. In 1900, several powers agreed to the U.S.-backed scheme, giving rise to the \"Open Door\" policy, denoting freedom of commercial access and non-annexation of Chinese territory. In any event, it was in the European powers' interest to have a weak but independent Chinese government. The privileges of the Europeans in China were guaranteed in the form of treaties with the Qing government. 
In the event that the Qing government totally collapsed, each power risked losing the privileges that it already had negotiated.", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 81, "text": "The erosion of Chinese sovereignty and seizures of land from the Chinese by foreigners contributed to a spectacular anti-foreign outbreak in June 1900, when the \"Boxers\" (properly the society of the \"righteous and harmonious fists\") attacked foreigners around Beijing. The Imperial Court was divided into anti-foreign and pro-foreign factions, with the pro-foreign faction led by Ronglu and Prince Qing hampering any military effort by the anti-foreign faction led by Prince Duan and Dong Fuxiang. The Qing Empress Dowager ordered all diplomatic ties to be cut off and all foreigners to leave the legations in Beijing and go to Tianjin. The foreigners refused to leave. Fueled by entirely false reports that the foreigners in the legations had been massacred, the Eight-Nation Alliance decided to launch an expedition to Beijing to reach the legations, but they underestimated the Qing military. The Qing and Boxers defeated the foreigners at the Seymour Expedition, forcing them to turn back at the Battle of Langfang. In response to the foreign attack on the Taku Forts, the Qing declared war against the foreigners. The Qing forces and foreigners fought a fierce battle at the Battle of Tientsin before the foreigners could launch a second expedition. On their second try, the Gaselee Expedition, with a much larger force, the foreigners managed to reach Beijing and fight the Battle of Peking. British and French forces looted, plundered and burned the Old Summer Palace to the ground for the second time (the first time being in 1860, following the Second Opium War). 
German forces were particularly severe in exacting revenge for the killing of their ambassador, on the orders of Kaiser Wilhelm II, who held anti-Asian sentiments, while Russia tightened its hold on Manchuria in the northeast until its crushing defeat by Japan in the war of 1904–1905. The Qing court evacuated to Xi'an and threatened to continue the war against foreigners, until the foreigners tempered their demands in the Boxer Protocol, promising that China would not have to give up any land and dropping their demands for the execution of Dong Fuxiang and Prince Duan.", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 82, "text": "The correspondent Douglas Story observed Chinese troops in 1907 and praised their abilities and military skill.", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 83, "text": "Extraterritorial jurisdiction was abandoned by the United Kingdom and the United States in 1943. Chiang Kai-shek forced the French to hand all their concessions back to Chinese control after World War II. Foreign political control over leased parts of China ended with the incorporation of the British territory of Hong Kong and the small Portuguese territory of Macau into the People's Republic of China in 1997 and 1999 respectively.", "title": "Western European and Russian intrusions into China" }, { "paragraph_id": 84, "text": "Some Americans in the 19th century advocated for the annexation of Taiwan from China. Taiwanese aborigines often attacked and massacred shipwrecked Western sailors. In 1867, during the Rover incident, Taiwanese aborigines attacked shipwrecked American sailors, killing the entire crew. They subsequently defeated a retaliatory expedition by the American military and killed another American during the battle.", "title": "U.S. 
imperialism in Asia" }, { "paragraph_id": 85, "text": "As the United States emerged as a new imperial power in the Pacific and Asia, one of the two oldest Western imperialist powers in the regions, Spain, was finding it increasingly difficult to maintain control of territories it had held there since the 16th century. In 1896, a widespread revolt against Spanish rule broke out in the Philippines. Meanwhile, the recent string of U.S. territorial gains in the Pacific posed an even greater threat to Spain's remaining colonial holdings.", "title": "U.S. imperialism in Asia" }, { "paragraph_id": 86, "text": "As the U.S. continued to expand its economic and military power in the Pacific, it declared war against Spain in 1898. During the Spanish–American War, U.S. Admiral Dewey destroyed the Spanish fleet at Manila and U.S. troops landed in the Philippines. Spain later agreed by treaty to cede the Philippines in Asia and Guam in the Pacific. In the Caribbean, Spain ceded Puerto Rico to the U.S. The war also marked the end of Spanish rule in Cuba, which was to be granted nominal independence but remained heavily influenced by the U.S. government and U.S. business interests. One year following its treaty with Spain, the U.S. occupied the small Pacific outpost of Wake Island.", "title": "U.S. imperialism in Asia" }, { "paragraph_id": 87, "text": "The Filipinos, who assisted U.S. troops in fighting the Spanish, wished to establish an independent state and, on June 12, 1898, declared independence from Spain. In 1899, fighting between the Filipino nationalists and the U.S. broke out; it took the U.S. almost fifteen years to fully subdue the insurgency. The U.S. sent 70,000 troops and suffered thousands of casualties. The Filipino insurgents, however, suffered considerably higher casualties than the Americans. Most casualties in the war were civilians dying primarily from disease and famine.", "title": "U.S. 
counter-insurgency operations in rural areas often included scorched-earth tactics which involved burning down villages and concentrating civilians into camps known as \"protected zones\". The execution of U.S. soldiers taken prisoner by the Filipinos led to disproportionate reprisals by American forces.", "title": "U.S. imperialism in Asia" }, { "paragraph_id": 89, "text": "The Moro Muslims fought against the Americans in the Moro Rebellion.", "title": "U.S. imperialism in Asia" }, { "paragraph_id": 90, "text": "In 1914, Dean C. Worcester, U.S. Secretary of the Interior for the Philippines (1901–1913), described \"the regime of civilisation and improvement which started with American occupation and resulted in developing naked savages into cultivated and educated men\". Nevertheless, some Americans, such as Mark Twain, deeply opposed American imperialism in the Philippines, leading to the abandonment of attempts to construct a permanent U.S. naval base and to use it as an entry point to the Chinese market. In 1916, Congress guaranteed the independence of the Philippines by 1945.", "title": "U.S. imperialism in Asia" }, { "paragraph_id": 91, "text": "World War I brought about the fall of several empires in Europe. This had repercussions around the world. The defeated Central Powers included Germany and the Turkish Ottoman Empire. Germany lost all of its colonies in Asia. German New Guinea, a part of Papua New Guinea, came to be administered by Australia. German possessions and concessions in China, including Qingdao, became the subject of a controversy during the Paris Peace Conference when the Beiyang government in China agreed to cede these interests to Japan, to the anger of many Chinese people. 
Although the Chinese diplomats refused to sign the agreement, these interests were ceded to Japan with the support of the United States and the United Kingdom.", "title": "World War I: changes in imperialism" }, { "paragraph_id": 92, "text": "Turkey gave up her provinces; Syria, Palestine, and Mesopotamia (now Iraq) came under French and British control as League of Nations Mandates. The discovery of petroleum first in Iran and then in the Arab lands in the interbellum provided a new focus for activity on the part of the United Kingdom, France, and the United States.", "title": "World War I: changes in imperialism" }, { "paragraph_id": 93, "text": "In 1641, all Westerners were thrown out of Japan. For the next two centuries, Japan was free from Western contact, except for at the port of Nagasaki, which Japan allowed Dutch merchant vessels to enter on a limited basis.", "title": "Japan" }, { "paragraph_id": 94, "text": "Japan's freedom from Western contact ended on 8 July 1853, when Commodore Matthew Perry of the U.S. Navy sailed a squadron of black-hulled warships into Edo (modern Tokyo) harbor. The Japanese told Perry to sail to Nagasaki, but he refused. Perry sought to present a letter from U.S. President Millard Fillmore to the emperor which demanded concessions from Japan. Japanese authorities responded by stating that they could not present the letter directly to the emperor, but scheduled a meeting on 14 July with a representative of the emperor. On 14 July, the squadron sailed towards the shore, giving a demonstration of its cannons' firepower thirteen times. Perry landed with a large detachment of Marines and presented the emperor's representative with Fillmore's letter. Perry said he would return, and did so, this time with even more warships. The U.S. show of force led to Japan's concession to the Convention of Kanagawa on 31 March 1854. 
This treaty conferred extraterritoriality on American nationals, as well as opening up further treaty ports beyond Nagasaki. This treaty was followed up by similar treaties with the United Kingdom, the Netherlands, Russia and France. These events made Japanese authorities aware that the country was lacking technologically and needed the strength of industrialism in order to keep its power. This realisation eventually led to a civil war and political reform known as the Meiji Restoration.", "title": "Japan" }, { "paragraph_id": 95, "text": "The Meiji Restoration of 1868 led to administrative overhaul, deflation and subsequent rapid economic development. Japan had limited natural resources of her own and sought both overseas markets and sources of raw materials, fuelling a drive for imperial conquest which began with the defeat of China in 1895.", "title": "Japan" }, { "paragraph_id": 96, "text": "Taiwan, ceded by Qing dynasty China, became the first Japanese colony. In 1899, Japan won agreements from the great powers to abandon extraterritoriality for their citizens, and an alliance with the United Kingdom established it in 1902 as an international power. Its spectacular defeat of Russia's navy in 1905 gave it the southern half of the island of Sakhalin; exclusive Japanese influence over Korea (propinquity); the former Russian lease of the Liaodong Peninsula with Port Arthur (Lüshunkou); and extensive rights in Manchuria (see the Russo-Japanese War).", "title": "Japan" }, { "paragraph_id": 97, "text": "The Empire of Japan and the Joseon dynasty in Korea formed bilateral diplomatic relations in 1876. China lost its suzerainty of Korea after defeat in the Sino-Japanese War in 1894. Russia also lost influence on the Korean peninsula with the Treaty of Portsmouth as a result of the Russo-Japanese war in 1904. The Joseon dynasty became increasingly dependent on Japan. Korea became a protectorate of Japan with the Japan–Korea Treaty of 1905. 
Korea was then de jure annexed to Japan with the Japan–Korea Treaty of 1910.", "title": "Japan" }, { "paragraph_id": 98, "text": "Japan was now one of the most powerful forces in the Far East, and in 1914, it entered World War I on the side of the Allies, seizing German-occupied Kiaochow and subsequently demanding Chinese acceptance of Japanese political influence and territorial acquisitions (Twenty-One Demands, 1915). Mass protests in Peking in 1919, which sparked widespread Chinese nationalism, coupled with Allied (and particularly U.S.) opinion, led to Japan's abandonment of most of the demands and Kiaochow's 1922 return to China. Japan had received the German territory under the Treaty of Versailles.", "title": "Japan" }, { "paragraph_id": 99, "text": "Tensions with China increased over the 1920s, and in 1931 the Japanese Kwantung Army, based in Manchuria, seized control of the region without authorization from Tokyo. Intermittent conflict with China led to full-scale war in mid-1937, drawing Japan toward an overambitious bid for Asian hegemony (Greater East Asia Co-Prosperity Sphere), which ultimately led to defeat and the loss of all its overseas territories after World War II (see Japanese expansionism and Japanese nationalism).", "title": "Japan" }, { "paragraph_id": 100, "text": "In the aftermath of World War II, European colonies, controlling more than one billion people throughout the world, still ruled most of the Middle East, South East Asia, and the Indian Subcontinent. However, the image of European pre-eminence was shattered by the wartime Japanese occupations of large portions of British, French, and Dutch territories in the Pacific. 
The destabilisation of European rule led to the rapid growth of nationalist movements in Asia—especially in Indonesia, Malaya, Burma, and French Indochina (Vietnam, Cambodia, and Laos).", "title": "After World War II" }, { "paragraph_id": 101, "text": "The war, however, only accelerated forces already in existence undermining Western imperialism in Asia. Throughout the colonial world, the processes of urbanisation and capitalist investment created professional merchant classes that emerged as new Westernised elites. While imbued with Western political and economic ideas, these classes increasingly grew to resent their unequal status under European rule.", "title": "After World War II" }, { "paragraph_id": 102, "text": "In India, the westward movement of Japanese forces towards Bengal during World War II had led to major concessions on the part of British authorities to Indian nationalist leaders. In 1947, the United Kingdom, devastated by war and embroiled in an economic crisis at home, granted British India its independence as two nations: India and Pakistan. Myanmar (Burma) and Sri Lanka (Ceylon) also gained their independence from the United Kingdom the following year, in 1948. In the Middle East, the United Kingdom granted independence to Jordan in 1946 and two years later, in 1948, ended its mandate of Palestine, part of which became the independent nation of Israel.", "title": "After World War II" }, { "paragraph_id": 103, "text": "Following the end of the war, nationalists in Indonesia demanded complete independence from the Netherlands. A brutal conflict ensued, and finally, in 1949, through United Nations mediation, the Dutch East Indies achieved independence, becoming the new nation of Indonesia. 
Dutch imperialism moulded this new multi-ethnic state comprising roughly 3,000 islands of the Indonesian archipelago with a population at the time of over 100 million.", "title": "After World War II" }, { "paragraph_id": 104, "text": "The end of Dutch rule opened up latent tensions between the roughly 300 distinct ethnic groups of the islands, with the major ethnic fault line being between the Javanese and the non-Javanese.", "title": "After World War II" }, { "paragraph_id": 105, "text": "Dutch New Guinea was under the Dutch administration until 1962 (see also West New Guinea dispute).", "title": "After World War II" }, { "paragraph_id": 106, "text": "In the Philippines, the U.S. remained committed to its previous pledges to grant the islands their independence, and the Philippines became the first of the Western-controlled Asian colonies to be granted independence post-World War II. However, the Philippines remained under pressure to adopt a political and economic system similar to the U.S.", "title": "After World War II" }, { "paragraph_id": 107, "text": "This aim was greatly complicated by the rise of new political forces. During the war, the Hukbalahap (People's Army), which had strong ties to the Communist Party of the Philippines (PKP), fought against the Japanese occupation of the Philippines and won strong popularity among many sectors of the Filipino working class and peasantry. In 1946, the PKP participated in elections as part of the Democratic Alliance. However, with the onset of the Cold War, its growing political strength drew a reaction from the ruling government and the United States, resulting in the repression of the PKP and its associated organizations. In 1948, the PKP began organizing an armed struggle against the government and continued U.S. military presence. In 1950, the PKP created the People's Liberation Army (Hukbong Mapagpalaya ng Bayan), which mobilized thousands of troops throughout the islands. 
The insurgency lasted until 1956 when the PKP gave up armed struggle.", "title": "After World War II" }, { "paragraph_id": 108, "text": "In 1968, the PKP underwent a split, and in 1969 the Maoist faction of the PKP created the New People's Army. Maoist rebels re-launched an armed struggle against the government and the U.S. military presence in the Philippines, which continues to this day.", "title": "After World War II" }, { "paragraph_id": 109, "text": "France remained determined to retain its control of Indochina. However, in Hanoi, in 1945, a broad front of nationalists and communists led by Ho Chi Minh declared an independent Democratic Republic of Vietnam, commonly referred to as the Viet Minh regime by Western outsiders. France, seeking to regain control of Vietnam, countered with a vague offer of self-government under French rule. France's offers were unacceptable to Vietnamese nationalists, and in December 1946 the Việt Minh launched a rebellion against the French authority governing the colonies of French Indochina. The first few years of the war involved a low-level rural insurgency against French authority. However, after the Chinese communists reached the northern border of Vietnam in 1949, the conflict turned into a conventional war between two armies equipped with modern weapons supplied by the United States and the Soviet Union. Meanwhile, France granted the State of Vietnam, based in Saigon, independence in 1949, while Laos and Cambodia received independence in 1953. The US recognized the regime in Saigon and provided the French military effort with military aid.", "title": "After World War II" }, { "paragraph_id": 110, "text": "Meanwhile, in Vietnam, the French war against the Viet Minh continued for nearly eight years. The French were gradually worn down by guerrilla and jungle fighting. The turning point for France occurred at Dien Bien Phu in 1954, which resulted in the surrender of ten thousand French troops. 
Paris was forced to accept a political settlement that year at the Geneva Conference, which led to a precarious set of agreements regarding the future political status of Laos, Cambodia, and Vietnam.", "title": "After World War II" }, { "paragraph_id": 111, "text": "British colonies in East Asia, South Asia, and Southeast Asia:", "title": "List of European colonies in Asia" }, { "paragraph_id": 112, "text": "French colonies in South and Southeast Asia:", "title": "List of European colonies in Asia" }, { "paragraph_id": 113, "text": "Dutch, British, Spanish, Portuguese colonies and Russian territories in Asia:", "title": "List of European colonies in Asia" } ]
The influence and imperialism of Western Europe and associated states peaked in Asian territories from the colonial period beginning in the 16th century and substantially receded with 20th-century decolonization. It originated in the 15th-century search for trade routes to the Indian subcontinent and Southeast Asia that led directly to the Age of Discovery, and additionally to the introduction of early modern warfare into what Europeans first called the East Indies and later the Far East. By the early 16th century, the Age of Sail greatly expanded Western European influence and development of the spice trade under colonialism. European-style colonial empires and imperialism operated in Asia throughout six centuries of colonialism, formally ending with the handover of the Portuguese Empire's last colony, Macau, in 1999. The empires introduced Western concepts of nation and the multinational state. This article attempts to outline the consequent development of the Western concept of the nation state. European political power, commerce, and culture in Asia gave rise to growing trade in commodities—a key development in the rise of today's modern world free market economy. In the 16th century, the Portuguese broke the (overland) monopoly of the Arabs and Italians in trade between Asia and Europe by the discovery of the sea route to India around the Cape of Good Hope. The ensuing rise of the rival Dutch East India Company gradually eclipsed Portuguese influence in Asia. Dutch forces first established independent bases in the East and then between 1640 and 1660 wrested Malacca, Ceylon, some southern Indian ports, and the lucrative Japan trade from the Portuguese. Later, the English and the French established settlements in India and trade with China, and their acquisitions would gradually surpass those of the Dutch. 
Following the end of the Seven Years' War in 1763, the British eliminated French influence in India and established the British East India Company as the most important political force on the Indian subcontinent. Before the Industrial Revolution in the mid-to-late 19th century, demand for oriental goods such as porcelain, silk, spices, and tea remained the driving force behind European imperialism. The Western European stake in Asia remained confined largely to trading stations and strategic outposts necessary to protect trade. Industrialization, however, dramatically increased European demand for Asian raw materials, with the severe Long Depression of the 1870s provoking a scramble for new markets for European industrial products and financial services in Africa, the Americas, Eastern Europe, and especially in Asia. This scramble coincided with a new era in global colonial expansion known as "the New Imperialism", which saw a shift in focus from trade and indirect rule to formal colonial control of vast overseas territories ruled as political extensions of their mother countries. Between the 1870s and the beginning of World War I in 1914, the United Kingdom, France, and the Netherlands—the established colonial powers in Asia—added to their empires vast expanses of territory in the Middle East, the Indian Subcontinent, and Southeast Asia. In the same period, the Empire of Japan, following the Meiji Restoration; the German Empire, following the end of the Franco-Prussian War in 1871; Tsarist Russia; and the United States, following the Spanish–American War in 1898, quickly emerged as new imperial powers in East Asia and in the Pacific Ocean area. In Asia, World War I and World War II were played out as struggles among several key imperial powers, with conflicts involving the European powers along with Russia and the rising American and Japanese empires. 
None of the colonial powers, however, possessed the resources to withstand the strains of both World Wars and maintain their direct rule in Asia. Although nationalist movements throughout the colonial world led to the political independence of nearly all of Asia's remaining colonies, decolonization was overtaken by the Cold War. South East Asia, South Asia, the Middle East, and East Asia remained embedded in a world economic, financial, and military system in which the great powers competed to extend their influence. However, the rapid post-war economic development and rise of the industrialized developed countries of Taiwan, Singapore, South Korea and Japan, and the developing countries of India, the People's Republic of China and its autonomous territory of Hong Kong, along with the collapse of the Soviet Union, have greatly diminished Western European influence in Asia. The United States remains influential with trade and military bases in Asia.
2001-12-24T00:58:28Z
2023-12-30T23:23:53Z
[ "Template:Asia topics", "Template:Reflist", "Template:Cite thesis", "Template:Refend", "Template:Commons category", "Template:Multiple issues", "Template:Which", "Template:Multiple image", "Template:NoteFoot", "Template:Cite journal", "Template:ISBN", "Template:Refbegin", "Template:Dead link", "Template:'", "Template:Flagicon", "Template:Cite book", "Template:Webarchive", "Template:Cn", "Template:More citations needed", "Template:Cite news", "Template:Cite web", "Template:Short description", "Template:Main", "Template:Flagicon image", "Template:Great Power diplomacy", "Template:New Imperialism", "Template:Sfn", "Template:Further", "Template:See also", "Template:NoteTag", "Template:Sfnp", "Template:Citation" ]
https://en.wikipedia.org/wiki/Western_imperialism_in_Asia
15,445
Entropy (information theory)
In information theory, the entropy of a random variable is the average level of "information", "surprise", or "uncertainty" inherent to the variable's possible outcomes. Given a discrete random variable X, which takes values in the alphabet 𝒳 and is distributed according to p : 𝒳 → [0, 1], the entropy is H(X) = −Σ_{x∈𝒳} p(x) log p(x), where Σ denotes the sum over the variable's possible values. The choice of base for log, the logarithm, varies for different applications. Base 2 gives the unit of bits (or "shannons"), while base e gives "natural units" nat, and base 10 gives units of "dits", "bans", or "hartleys". An equivalent definition of entropy is the expected value of the self-information of a variable. The concept of information entropy was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication", and is also referred to as Shannon entropy. Shannon's theory defines a data communication system composed of three elements: a source of data, a communication channel, and a receiver. The "fundamental problem of communication" – as expressed by Shannon – is for the receiver to be able to identify what data was generated by the source, based on the signal it receives through the channel. Shannon considered various ways to encode, compress, and transmit messages from a data source, and proved in his famous source coding theorem that the entropy represents an absolute mathematical limit on how well data from the source can be losslessly compressed onto a perfectly noiseless channel. Shannon strengthened this result considerably for noisy channels in his noisy-channel coding theorem. Entropy in information theory is directly analogous to the entropy in statistical thermodynamics. 
The analogy results when the values of the random variable designate energies of microstates, so Gibbs's formula for the entropy is formally identical to Shannon's formula. Entropy has relevance to other areas of mathematics such as combinatorics and machine learning. The definition can be derived from a set of axioms establishing that entropy should be a measure of how informative the average outcome of a variable is. For a continuous random variable, differential entropy is analogous to entropy. The definition E[−log p(X)] generalizes the above. The core idea of information theory is that the "informational value" of a communicated message depends on the degree to which the content of the message is surprising. If a highly likely event occurs, the message carries very little information. On the other hand, if a highly unlikely event occurs, the message is much more informative. For instance, the knowledge that some particular number will not be the winning number of a lottery provides very little information, because any particular chosen number will almost certainly not win. However, knowledge that a particular number will win a lottery has high informational value because it communicates the outcome of a very low probability event. The information content, also called the surprisal or self-information, of an event E is a function which increases as the probability p(E) of an event decreases. When p(E) is close to 1, the surprisal of the event is low, but if p(E) is close to 0, the surprisal of the event is high. This relationship is described by the function log(1/p(E)), where log is the logarithm, which gives 0 surprise when the probability of the event is 1. In fact, log is the only function that satisfies a specific set of conditions defined in section § Characterization. 
Hence, we can define the information, or surprisal, of an event E by I(E) = −log(p(E)), or equivalently, I(E) = log(1/p(E)). Entropy measures the expected (i.e., average) amount of information conveyed by identifying the outcome of a random trial. This implies that rolling a die has higher entropy than tossing a coin because each outcome of a die roll has smaller probability (p = 1/6) than each outcome of a coin toss (p = 1/2). Consider a coin with probability p of landing on heads and probability 1 − p of landing on tails. The maximum surprise is when p = 1/2, for which one outcome is not expected over the other. In this case a coin flip has an entropy of one bit. (Similarly, one trit with equiprobable values contains log₂ 3 (about 1.58496) bits of information because it can have one of three values.) The minimum surprise is when p = 0 or p = 1, when the event outcome is known ahead of time, and the entropy is zero bits. When the entropy is zero bits, this is sometimes referred to as unity, where there is no uncertainty at all – no freedom of choice – no information. Other values of p give entropies between zero and one bits. Information theory is useful to calculate the smallest amount of information required to convey a message, as in data compression. For example, consider the transmission of sequences comprising the 4 characters 'A', 'B', 'C', and 'D' over a binary channel. If all 4 letters are equally likely (25%), one cannot do better than using two bits to encode each letter. 'A' might code as '00', 'B' as '01', 'C' as '10', and 'D' as '11'. However, if the probabilities of each letter are unequal, say 'A' occurs with 70% probability, 'B' with 26%, and 'C' and 'D' with 2% each, one could assign variable-length codes. In this case, 'A' would be coded as '0', 'B' as '10', 'C' as '110', and 'D' as '111'. 
With this representation, 70% of the time only one bit needs to be sent, 26% of the time two bits, and only 4% of the time 3 bits. On average, fewer than 2 bits are required since the entropy is lower (owing to the high prevalence of 'A' followed by 'B' – together 96% of characters). The calculation of the sum of probability-weighted log probabilities measures and captures this effect. English text, treated as a string of characters, has fairly low entropy; i.e. it is fairly predictable. We can be fairly certain that, for example, 'e' will be far more common than 'z', that the combination 'qu' will be much more common than any other combination with a 'q' in it, and that the combination 'th' will be more common than 'z', 'q', or 'qu'. After the first few letters one can often guess the rest of the word. English text has between 0.6 and 1.3 bits of entropy per character of the message. Named after Boltzmann's Η-theorem, Shannon defined the entropy Η (Greek capital letter eta) of a discrete random variable X, which takes values in the alphabet 𝒳 and is distributed according to p : 𝒳 → [0, 1] such that p(x) := P[X = x], as H(X) = E[I(X)] = E[−log p(X)]. Here E is the expected value operator, and I is the information content of X. I(X) is itself a random variable. The entropy can explicitly be written as H(X) = −Σ_{x∈𝒳} p(x) log_b p(x), where b is the base of the logarithm used. Common values of b are 2, Euler's number e, and 10, and the corresponding units of entropy are the bits for b = 2, nats for b = e, and bans for b = 10. 
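As an illustrative sketch (an addition here, not part of the original article), the explicit formula H(X) = −Σ p(x) log_b p(x) can be computed directly; the second distribution below reuses the 70%/26%/2%/2% 'A'/'B'/'C'/'D' coding example from above:

```python
import math

def entropy(probs, base=2):
    """Shannon entropy of a discrete distribution, in units set by `base`
    (2 -> bits, e -> nats, 10 -> bans). Terms with p = 0 contribute 0."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

fair_coin = [0.5, 0.5]
skewed = [0.70, 0.26, 0.02, 0.02]      # the 'A', 'B', 'C', 'D' example

print(entropy(fair_coin))              # 1.0 bit
print(entropy(skewed))                 # ≈ 1.09 bits: fewer than 2 bits per symbol
print(entropy(fair_coin, math.e))      # ≈ 0.693 nats
```

The skewed distribution's entropy of about 1.09 bits is consistent with the claim that, on average, fewer than 2 bits per symbol suffice under a variable-length code.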
In the case of p(x) = 0 for some x ∈ 𝒳, the value of the corresponding summand 0 log_b(0) is taken to be 0, which is consistent with the limit lim_{p→0⁺} p log(p) = 0. One may also define the conditional entropy of two variables X and Y taking values from sets 𝒳 and 𝒴 respectively, as H(X|Y) = −Σ_{x,y} p_{X,Y}(x, y) log( p_{X,Y}(x, y) / p_Y(y) ), where p_{X,Y}(x, y) := P[X = x, Y = y] and p_Y(y) := P[Y = y]. This quantity should be understood as the remaining randomness in the random variable X given the random variable Y. Entropy can be formally defined in the language of measure theory as follows: Let (X, Σ, μ) be a probability space. Let A ∈ Σ be an event. The surprisal of A is σ_μ(A) = −ln μ(A). The expected surprisal of A is h_μ(A) = μ(A) σ_μ(A). A μ-almost partition is a set family P ⊆ 𝒫(X) such that μ(∪P) = 1 and μ(A ∩ B) = 0 for all distinct A, B ∈ P. (This is a relaxation of the usual conditions for a partition.) The entropy of P is H_μ(P) = Σ_{A∈P} h_μ(A). Let M be a sigma-algebra on X. The entropy of M is H_μ(M) = sup_{P⊆M} H_μ(P), the supremum being taken over μ-almost partitions whose members lie in M. Finally, the entropy of the probability space is H_μ(Σ), that is, the entropy with respect to μ of the sigma-algebra of all measurable subsets of X. Consider tossing a coin with known, not necessarily fair, probabilities of coming up heads or tails; this can be modelled as a Bernoulli process. 
The entropy of the unknown result of the next toss of the coin is maximized if the coin is fair (that is, if heads and tails both have equal probability 1/2). This is the situation of maximum uncertainty as it is most difficult to predict the outcome of the next toss; the result of each toss of the coin delivers one full bit of information. This is because H = −(½ log₂ ½ + ½ log₂ ½) = 1 bit. However, if we know the coin is not fair, but comes up heads or tails with probabilities p and q, where p ≠ q, then there is less uncertainty. Every time it is tossed, one side is more likely to come up than the other. The reduced uncertainty is quantified in a lower entropy: on average each toss of the coin delivers less than one full bit of information. For example, if p = 0.7, then H = −(0.7 log₂ 0.7 + 0.3 log₂ 0.3) ≈ 0.8813 bits. Uniform probability yields maximum uncertainty and therefore maximum entropy. Entropy, then, can only decrease from the value associated with uniform probability. The extreme case is that of a double-headed coin that never comes up tails, or a double-tailed coin that never results in a head. Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no new information as the outcome of each coin toss is always certain. To understand the meaning of −Σ pi log(pi), first define an information function I in terms of an event i with probability pi. The amount of information acquired due to the observation of event i follows from Shannon's solution of the fundamental properties of information: Given two independent events, if the first event can yield one of n equiprobable outcomes and another has one of m equiprobable outcomes then there are mn equiprobable outcomes of the joint event. This means that if log2(n) bits are needed to encode the first value and log2(m) to encode the second, one needs log2(mn) = log2(m) + log2(n) to encode both. 
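The coin-toss discussion above can be checked numerically. This is a sketch added here for illustration (not from the article); `binary_entropy` is an assumed helper name:

```python
import math

def binary_entropy(p):
    """Entropy, in bits, of a coin that lands heads with probability p."""
    if p in (0.0, 1.0):            # the summand 0·log(0) is taken as 0
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(binary_entropy(0.5))   # 1.0: maximum uncertainty, one full bit per toss
print(binary_entropy(0.7))   # ≈ 0.8813: a biased coin is more predictable
print(binary_entropy(1.0))   # 0.0: a double-headed coin carries no information

# Additivity of information for independent equiprobable choices:
n, m = 8, 4
assert math.log2(n * m) == math.log2(n) + math.log2(m)   # 5 = 3 + 2
```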
Shannon discovered that a suitable choice of I {\displaystyle \operatorname {I} } is given by: In fact, the only possible values of I {\displaystyle \operatorname {I} } are I ( u ) = k log u {\displaystyle \operatorname {I} (u)=k\log u} for k < 0 {\displaystyle k<0} . Additionally, choosing a value for k is equivalent to choosing a value x > 1 {\displaystyle x>1} for k = − 1 / log x {\displaystyle k=-1/\log x} , so that x corresponds to the base for the logarithm. Thus, entropy is characterized by the above four properties. The different units of information (bits for the binary logarithm log2, nats for the natural logarithm ln, bans for the decimal logarithm log10 and so on) are constant multiples of each other. For instance, in the case of a fair coin toss, heads provides log2(2) = 1 bit of information, which is approximately 0.693 nats or 0.301 decimal digits. Because of additivity, n tosses provide n bits of information, which is approximately 0.693n nats or 0.301n decimal digits. The meaning of the events observed (the meaning of messages) does not matter in the definition of entropy. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves. Another characterization of entropy uses the following properties. We denote pi = Pr(X = xi) and Ηn(p1, ..., pn) = Η(X). The rule of additivity has the following consequences: for positive integers bi where b1 + ... + bk = n, Choosing k = n, b1 = ... = bn = 1, this implies that the entropy of a certain outcome is zero: Η1(1) = 0. This implies that the efficiency of a source alphabet with n symbols can be defined simply as being equal to its n-ary entropy. See also Redundancy (information theory). The characterization here imposes an additive property with respect to a partition of a set.
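The unit conversions above follow directly from the change-of-base rule for logarithms. A small sketch, using a fair coin toss as the example event:

```python
from math import e, log

def entropy_base(probs, base):
    # Entropy of a discrete distribution, in the logarithm base given.
    return -sum(p * log(p, base) for p in probs if p > 0)

fair_coin = [0.5, 0.5]
bits = entropy_base(fair_coin, 2)   # shannons / bits
nats = entropy_base(fair_coin, e)   # natural units
bans = entropy_base(fair_coin, 10)  # hartleys / decimal digits
print(bits, round(nats, 3), round(bans, 3))  # 1.0 0.693 0.301
```

The three values differ only by the constant factors ln 2 and log10 2, as the text states.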
Meanwhile, the conditional probability is defined in terms of a multiplicative property, P ( A ∣ B ) ⋅ P ( B ) = P ( A ∩ B ) {\displaystyle P(A\mid B)\cdot P(B)=P(A\cap B)} . Observe that a logarithm mediates between these two operations. The conditional entropy and related quantities inherit simple relations, in turn. The measure-theoretic definition in the previous section defined the entropy as a sum over expected surprisals − μ ( A ) ⋅ ln μ ( A ) {\displaystyle -\mu (A)\cdot \ln \mu (A)} for an extremal partition. Here the logarithm is ad hoc and the entropy is not a measure in itself. At least in the information theory of a binary string, log 2 {\displaystyle \log _{2}} lends itself to practical interpretations. Motivated by such relations, a plethora of related and competing quantities have been defined. For example, David Ellerman's analysis of a "logic of partitions" defines a competing measure in structures dual to that of subsets of a universal set. Information is quantified as "dits" (distinctions), a measure on partitions. "Dits" can be converted into Shannon's bits, to get the formulas for conditional entropy, and so on. Another succinct axiomatic characterization of Shannon entropy was given by Aczél, Forte and Ng, via the following properties: It was shown that any function H {\displaystyle \mathrm {H} } satisfying the above properties must be a constant multiple of Shannon entropy, with a non-negative constant. Compared to the previously mentioned characterizations of entropy, this characterization focuses on the properties of entropy as a function of random variables (subadditivity and additivity), rather than the properties of entropy as a function of the probability vector p 1 , … , p n {\displaystyle p_{1},\ldots ,p_{n}} . It is worth noting that if we drop the "small for small probabilities" property, then H {\displaystyle \mathrm {H} } must be a non-negative linear combination of the Shannon entropy and the Hartley entropy.
The Shannon entropy satisfies the following properties, for some of which it is useful to interpret entropy as the expected amount of information learned (or uncertainty eliminated) by revealing the value of a random variable X: The inspiration for adopting the word entropy in information theory came from the close resemblance between Shannon's formula and very similar known formulae from statistical mechanics. In statistical thermodynamics the most general formula for the thermodynamic entropy S of a thermodynamic system is the Gibbs entropy where kB is the Boltzmann constant, and pi is the probability of a microstate. The Gibbs entropy was defined by J. Willard Gibbs in 1878 after earlier work by Boltzmann (1872). The Gibbs entropy translates over almost unchanged into the world of quantum physics to give the von Neumann entropy introduced by John von Neumann in 1927: where ρ is the density matrix of the quantum mechanical system and Tr is the trace. At an everyday practical level, the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. As the minuteness of the Boltzmann constant kB indicates, the changes in S / kB for even tiny amounts of substances in chemical and physical processes represent amounts of entropy that are extremely large compared to anything in data compression or signal processing. In classical thermodynamics, entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy. 
The connection between thermodynamics and what is now known as information theory was first made by Ludwig Boltzmann and expressed by his famous equation: where S {\displaystyle S} is the thermodynamic entropy of a particular macrostate (defined by thermodynamic parameters such as temperature, volume, energy, etc.), W is the number of microstates (various combinations of particles in various energy states) that can yield the given macrostate, and kB is the Boltzmann constant. It is assumed that each microstate is equally likely, so that the probability of a given microstate is pi = 1/W. When these probabilities are substituted into the above expression for the Gibbs entropy (or equivalently kB times the Shannon entropy), Boltzmann's equation results. In information theoretic terms, the information entropy of a system is the amount of "missing" information needed to determine a microstate, given the macrostate. In the view of Jaynes (1957), thermodynamic entropy, as explained by statistical mechanics, should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. Adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states of the system that are consistent with the measurable values of its macroscopic variables, making any complete state description longer. (See article: maximum entropy thermodynamics). 
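The correspondence can be checked numerically: with equally likely microstates pi = 1/W, the Gibbs formula reduces exactly to Boltzmann's S = kB ln W. A sketch, with an arbitrarily chosen illustrative W:

```python
from math import isclose, log

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact in the 2019 SI)

def gibbs_entropy(probs):
    # S = -kB * sum_i p_i ln p_i
    return -K_B * sum(p * log(p) for p in probs if p > 0)

def boltzmann_entropy(W):
    # S = kB ln W, valid when all W microstates are equally likely
    return K_B * log(W)

W = 10_000  # an illustrative number of microstates, not a physical value
uniform_microstates = [1.0 / W] * W
print(isclose(gibbs_entropy(uniform_microstates),
              boltzmann_entropy(W), rel_tol=1e-9))  # True
```

The tiny magnitude of the result (on the order of 10^-19 J/K even for this many microstates) illustrates the point above about the minuteness of kB.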
Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox). Landauer's principle imposes a lower bound on the amount of heat a computer must generate to process a given amount of information, though modern computers are far less efficient. Shannon's definition of entropy, when applied to an information source, can determine the minimum channel capacity required to reliably transmit the source as encoded binary digits. Shannon's entropy measures the information contained in a message as opposed to the portion of the message that is determined (or predictable). Examples of the latter include redundancy in language structure or statistical properties relating to the occurrence frequencies of letter or word pairs, triplets etc. The minimum channel capacity can be realized in theory by using the typical set or in practice using Huffman, Lempel–Ziv or arithmetic coding. (See also Kolmogorov complexity.) In practice, compression algorithms deliberately include some judicious redundancy in the form of checksums to protect against errors. The entropy rate of a data source is the average number of bits per symbol needed to encode it. Shannon's experiments with human predictors show an information rate between 0.6 and 1.3 bits per character in English; the PPM compression algorithm can achieve a compression ratio of 1.5 bits per character in English text. 
If a compression scheme is lossless – one in which you can always recover the entire original message by decompression – then a compressed message has the same quantity of information as the original but communicated in fewer characters. It has more information (higher entropy) per character. A compressed message has less redundancy. Shannon's source coding theorem states that a lossless compression scheme cannot compress messages, on average, to have more than one bit of information per bit of message, but that any value less than one bit of information per bit of message can be attained by employing a suitable coding scheme. The entropy of a message per bit multiplied by the length of that message is a measure of how much total information the message contains. Shannon's theorem also implies that no lossless compression scheme can shorten all messages. If some messages come out shorter, at least one must come out longer due to the pigeonhole principle. In practical use, this is generally not a problem, because one is usually only interested in compressing certain types of messages, such as a document in English, as opposed to gibberish text, or digital photographs rather than noise, and it is unimportant if a compression algorithm makes some unlikely or uninteresting sequences larger. A 2011 study in Science estimates the world's technological capacity to store and communicate optimally compressed information, normalized to the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources. The authors estimate humankind's technological capacity to store information (fully entropically compressed) in 1986 and again in 2007. They break the information into three categories—to store information on a medium, to receive information through one-way broadcast networks, or to exchange information through two-way telecommunication networks.
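Both points – losslessness and the impossibility of shrinking everything – are easy to observe with a general-purpose compressor such as zlib from the Python standard library (a sketch; exact sizes depend on the compressor):

```python
import os
import zlib

# A highly redundant message compresses far below its raw length,
# and decompression recovers it exactly (losslessness).
redundant = b"the quick brown fox jumps over the lazy dog " * 200
packed = zlib.compress(redundant, 9)
assert zlib.decompress(packed) == redundant
print(len(redundant), len(packed))

# Uniformly random bytes are already near maximum entropy: consistent
# with the pigeonhole argument, the "compressed" form does not shrink
# and in practice picks up a few bytes of framing overhead.
noise = os.urandom(len(redundant))
print(len(zlib.compress(noise, 9)) > len(noise))  # True
```
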
Entropy is one of several ways to measure biodiversity, and is applied in the form of the Shannon index. A diversity index is a quantitative statistical measure of how many different types exist in a dataset, such as species in a community, accounting for ecological richness, evenness, and dominance. Specifically, Shannon entropy is the logarithm of D, the true diversity index with parameter equal to 1. The Shannon index is related to the proportional abundances of types. There are a number of entropy-related concepts that mathematically quantify information content of a sequence or message: (The "rate of self-information" can also be defined for a particular sequence of messages or symbols generated by a given stochastic process: this will always be equal to the entropy rate in the case of a stationary process.) Other quantities of information are also used to compare or relate different sources of information. It is important not to confuse the above concepts. Often it is only clear from context which one is meant. For example, when someone says that the "entropy" of the English language is about 1 bit per character, they are actually modeling the English language as a stochastic process and talking about its entropy rate. Shannon himself used the term in this way. If very large blocks are used, the estimate of per-character entropy rate may become artificially low because the probability distribution of the sequence is not known exactly; it is only an estimate. If one considers the text of every book ever published as a sequence, with each symbol being the text of a complete book, and if there are N published books, and each book is only published once, the estimate of the probability of each book is 1/N, and the entropy (in bits) is −log2(1/N) = log2(N). As a practical code, this corresponds to assigning each book a unique identifier and using it in place of the text of the book whenever one wants to refer to the book. 
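As a sketch with invented abundance counts, the Shannon index H and the corresponding order-1 true diversity D = exp(H) can be computed as follows:

```python
from math import exp, log

def shannon_index(counts):
    # H = -sum_i p_i ln p_i over the proportional abundances p_i
    total = sum(counts)
    return -sum((c / total) * log(c / total) for c in counts if c > 0)

def true_diversity(counts):
    # Order-1 true diversity D is the exponential of the Shannon index
    return exp(shannon_index(counts))

# Four equally abundant species: D equals the species count (maximum evenness).
print(true_diversity([25, 25, 25, 25]))  # 4.0
# Dominance by one species lowers both H and D.
print(true_diversity([97, 1, 1, 1]) < 4.0)  # True
```
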
This is enormously useful for talking about books, but it is not so useful for characterizing the information content of an individual book, or of language in general: it is not possible to reconstruct the book from its identifier without knowing the probability distribution, that is, the complete text of all the books. The key idea is that the complexity of the probabilistic model must be considered. Kolmogorov complexity is a theoretical generalization of this idea that allows the consideration of the information content of a sequence independent of any particular probability model; it considers the shortest program for a universal computer that outputs the sequence. A code that achieves the entropy rate of a sequence for a given model, plus the codebook (i.e. the probabilistic model), is one such program, but it may not be the shortest. The Fibonacci sequence is 1, 1, 2, 3, 5, 8, 13, .... Treating the sequence as a message and each number as a symbol, there are almost as many symbols as there are characters in the message, giving an entropy of approximately log2(n). The first 128 symbols of the Fibonacci sequence have an entropy of approximately 7 bits/symbol, but the sequence can be expressed using a formula [F(n) = F(n−1) + F(n−2) for n = 3, 4, 5, ..., F(1) = 1, F(2) = 1] and this formula has a much lower entropy and applies to any length of the Fibonacci sequence. In cryptanalysis, entropy is often roughly used as a measure of the unpredictability of a cryptographic key, though its real uncertainty is unmeasurable. For example, a 128-bit key that is uniformly and randomly generated has 128 bits of entropy. It also takes (on average) 2 127 {\displaystyle 2^{127}} guesses to break by brute force. Entropy fails to capture the number of guesses required if the possible keys are not chosen uniformly. Instead, a measure called guesswork can be used to measure the effort required for a brute force attack.
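The Fibonacci figure is easy to reproduce: treating each of the first 128 Fibonacci numbers as one symbol, every value except the repeated 1 occurs exactly once, so the empirical per-symbol entropy sits just under log2(128) = 7 bits. A sketch:

```python
from collections import Counter
from math import log2

def fibonacci(n):
    # First n Fibonacci numbers, starting 1, 1, 2, 3, 5, ...
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

def empirical_entropy(symbols):
    # Entropy of the empirical (frequency-count) distribution of the symbols
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * log2(c / total) for c in counts.values())

print(empirical_entropy(fibonacci(128)))  # 6.984375, just under 7 bits/symbol
```

The short recurrence generating the sequence is, of course, a far more compact description than the near-7-bit-per-symbol code, which is the point of the passage above.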
Other problems may arise from non-uniform distributions used in cryptography. Consider, for example, a 1,000,000-digit binary one-time pad using exclusive or. If the pad has 1,000,000 bits of entropy, it is perfect. If the pad has 999,999 bits of entropy, evenly distributed (each individual bit of the pad having 0.999999 bits of entropy), it may provide good security. But if the pad has 999,999 bits of entropy, where the first bit is fixed and the remaining 999,999 bits are perfectly random, the first bit of the ciphertext will not be encrypted at all. A common way to define entropy for text is based on the Markov model of text. For an order-0 source (each character is selected independently of the previous characters), the binary entropy is: where pi is the probability of i. For a first-order Markov source (one in which the probability of selecting a character is dependent only on the immediately preceding character), the entropy rate is: where i is a state (certain preceding characters) and p i ( j ) {\displaystyle p_{i}(j)} is the probability of j given i as the previous character. For a second-order Markov source, the entropy rate is A source alphabet with non-uniform distribution will have less entropy than if those symbols had uniform distribution (i.e. the "optimized alphabet"). This deficiency in entropy can be expressed as a ratio called efficiency: Applying the basic properties of the logarithm, this quantity can also be expressed as: Efficiency has utility in quantifying the effective use of a communication channel. This formulation is also referred to as the normalized entropy, as the entropy is divided by the maximum entropy log b ( n ) {\displaystyle {\log _{b}(n)}} . Furthermore, the efficiency is indifferent to the choice of (positive) base b, since the base cancels in the ratio. The Shannon entropy is restricted to random variables taking discrete values.
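The order-0 entropy, the first-order Markov entropy rate, and the efficiency can all be estimated from character counts. A sketch on an illustrative toy string (not a model of real English):

```python
from collections import Counter, defaultdict
from math import log2

def order0_entropy(text):
    # H = -sum_i p_i log2 p_i over single-character frequencies
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def markov1_entropy_rate(text):
    # H = sum_i p_i * ( -sum_j p_i(j) log2 p_i(j) ),
    # where p_i(j) is the probability of j given previous character i.
    transitions = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        transitions[a][b] += 1
    n = len(text) - 1
    rate = 0.0
    for a, nexts in transitions.items():
        total = sum(nexts.values())
        cond = -sum((c / total) * log2(c / total) for c in nexts.values())
        rate += (total / n) * cond
    return rate

def efficiency(text):
    # Order-0 entropy divided by the maximum possible log2(alphabet size)
    return order0_entropy(text) / log2(len(set(text)))

sample = "abababababababab"
print(order0_entropy(sample))        # 1.0: 'a' and 'b' equally frequent
print(markov1_entropy_rate(sample))  # 0.0: each next character is determined
```

The gap between the two numbers shows how conditioning on context removes apparent randomness: the string looks maximally random character by character, but is fully predictable given the previous character.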
The corresponding formula for a continuous random variable with probability density function f(x) with finite or infinite support X {\displaystyle \mathbb {X} } on the real line is defined by analogy, using the above form of the entropy as an expectation: This is the differential entropy (or continuous entropy). A precursor of the continuous entropy h[f] is the expression for the functional Η in the H-theorem of Boltzmann. Although the analogy between both functions is suggestive, the following question must be asked: is the differential entropy a valid extension of the Shannon discrete entropy? Differential entropy lacks a number of properties that the Shannon discrete entropy has – it can even be negative – and corrections have been suggested, notably limiting density of discrete points. To answer this question, a connection must be established between the two functions, in order to obtain a generally finite measure as the bin size goes to zero. In the discrete case, the bin size is the (implicit) width of each of the n (finite or infinite) bins whose probabilities are denoted by pn. As the continuous domain is generalized, the width must be made explicit. To do this, start with a continuous function f discretized into bins of size Δ {\displaystyle \Delta } . By the mean-value theorem there exists a value xi in each bin such that the integral of the function f can be approximated (in the Riemannian sense) by where this limit and "bin size goes to zero" are equivalent. We will denote and expanding the logarithm, we have As Δ → 0, we have Note that since log(Δ) → −∞ as Δ → 0, a special definition of the differential or continuous entropy is required: which is, as said before, referred to as the differential entropy. This means that the differential entropy is not a limit of the Shannon entropy for n → ∞. Rather, it differs from the limit of the Shannon entropy by an infinite offset (see also the article on information dimension).
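The infinite offset can be seen numerically. For the uniform density on [0, 1] the differential entropy h is 0, yet the entropy of the Δ-discretized distribution equals −log2 Δ, which diverges as the bins shrink. A sketch:

```python
from math import log2

def discretized_entropy(pdf, a, b, n_bins):
    # Discretize a density on [a, b] into n_bins bins (midpoint rule) and
    # compute the Shannon entropy of the resulting discrete distribution.
    width = (b - a) / n_bins
    probs = []
    for i in range(n_bins):
        x = a + (i + 0.5) * width
        probs.append(pdf(x) * width)
    return -sum(p * log2(p) for p in probs if p > 0)

uniform_pdf = lambda x: 1.0  # Uniform(0, 1): differential entropy h = 0
for n in (4, 64, 1024):
    # The discretized entropy equals log2(n) = -log2(bin width): it grows
    # without bound as the bins shrink, h plus an infinite offset.
    print(n, discretized_entropy(uniform_pdf, 0.0, 1.0, n))
```
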
It turns out as a result that, unlike the Shannon entropy, the differential entropy is not in general a good measure of uncertainty or information. For example, the differential entropy can be negative; also it is not invariant under continuous co-ordinate transformations. This problem may be illustrated by a change of units when x is a dimensioned variable. f(x) will then have the units of 1/x. The argument of the logarithm must be dimensionless, otherwise it is improper, so that the differential entropy as given above will be improper. If Δ is some "standard" value of x (i.e. "bin size") and therefore has the same units, then a modified differential entropy may be written in proper form as: and the result will be the same for any choice of units for x. In fact, the limit of discrete entropy as N → ∞ {\displaystyle N\rightarrow \infty } would also include a term of log ( N ) {\displaystyle \log(N)} , which would in general be infinite. This is expected: continuous variables would typically have infinite entropy when discretized. The limiting density of discrete points is really a measure of how much easier a distribution is to describe than a distribution that is uniform over its quantization scheme. Another useful measure of entropy that works equally well in the discrete and the continuous case is the relative entropy of a distribution. It is defined as the Kullback–Leibler divergence from the distribution to a reference measure m as follows. Assume that a probability distribution p is absolutely continuous with respect to a measure m, i.e. is of the form p(dx) = f(x)m(dx) for some non-negative m-integrable function f with m-integral 1, then the relative entropy can be defined as In this form the relative entropy generalizes (up to change in sign) both the discrete entropy, where the measure m is the counting measure, and the differential entropy, where the measure m is the Lebesgue measure. 
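In the discrete case the relative entropy is the Kullback–Leibler divergence. A minimal sketch with made-up distributions; it is zero exactly when the two distributions coincide and non-negative otherwise (Gibbs' inequality):

```python
from math import log2

def kl_divergence(p, q):
    # D(p || q) = sum_i p_i log2(p_i / q_i); requires q_i > 0 wherever p_i > 0
    return sum(pi * log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]
print(kl_divergence(skewed, skewed))        # 0.0: identical distributions
print(kl_divergence(skewed, uniform) > 0)   # True: non-negativity
print(kl_divergence(uniform, skewed) > 0)   # True: note D is not symmetric
```
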
If the measure m is itself a probability distribution, the relative entropy is non-negative, and zero if p = m as measures. It is defined for any measure space, hence coordinate independent and invariant under co-ordinate reparameterizations if one properly takes into account the transformation of the measure m. The relative entropy, and (implicitly) entropy and differential entropy, do depend on the "reference" measure m. Terence Tao used entropy to make a useful connection in his solution of the Erdős discrepancy problem. Intuitively, the idea behind the proof is as follows: here the random variable is defined using the Liouville function (a useful mathematical function for studying the distribution of primes) as XH = λ ( n + H ) {\displaystyle \lambda (n+H)} , and if there is low information, in terms of Shannon entropy, between consecutive random variables, then the sum over an interval [n, n+H] can become arbitrarily large. For example, a sequence of +1's (one of the values XH can take) has trivially low entropy and its sum becomes large. The key insight was that showing a reduction in entropy by non-negligible amounts as H grows, leading in turn to unbounded growth of a mathematical object over this random variable, is equivalent to showing the unbounded growth required by the Erdős discrepancy problem. The proof is quite involved and it brought together breakthroughs not just in the novel use of Shannon entropy, but also in the use of the Liouville function along with averages of modulated multiplicative functions in short intervals. Proving it also broke the "parity barrier" for this specific problem. While the use of Shannon entropy in the proof is novel, it is likely to open new research in this direction. Entropy has become a useful quantity in combinatorics.
A simple example of this is an alternative proof of the Loomis–Whitney inequality: for every subset A ⊆ Z^d, we have where Pi is the orthogonal projection in the ith coordinate: The proof follows as a simple corollary of Shearer's inequality: if X1, ..., Xd are random variables and S1, ..., Sn are subsets of {1, ..., d} such that every integer between 1 and d lies in exactly r of these subsets, then where ( X j ) j ∈ S i {\displaystyle (X_{j})_{j\in S_{i}}} is the Cartesian product of random variables Xj with indexes j in Si (so the dimension of this vector is equal to the size of Si). We sketch how Loomis–Whitney follows from this: Indeed, let X be a uniformly distributed random variable with values in A, so that each point in A occurs with equal probability. Then (by the further properties of entropy mentioned above) Η(X) = log|A|, where |A| denotes the cardinality of A. Let Si = {1, 2, ..., i−1, i+1, ..., d}. The range of ( X j ) j ∈ S i {\displaystyle (X_{j})_{j\in S_{i}}} is contained in Pi(A) and hence H [ ( X j ) j ∈ S i ] ≤ log | P i ( A ) | {\displaystyle \mathrm {H} [(X_{j})_{j\in S_{i}}]\leq \log |P_{i}(A)|} . Now use this to bound the right side of Shearer's inequality and exponentiate both sides of the resulting inequality. For integers 0 < k < n let q = k/n. Then where A nice interpretation of this is that the number of binary strings of length n with exactly k many 1's is approximately 2 n H ( k / n ) {\displaystyle 2^{n\mathrm {H} (k/n)}} . Machine learning techniques arise largely from statistics and also information theory. In general, entropy is a measure of uncertainty and the objective of machine learning is to minimize uncertainty. Decision tree learning algorithms use relative entropy to determine the decision rules that govern the data at each node.
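The closing approximation can be checked numerically: log2 of the binomial coefficient is bounded above by nH(k/n), and the gap grows only logarithmically in n. A sketch with an arbitrary choice of n and k:

```python
from math import comb, log2

def binary_entropy(p):
    # H(p) = -p log2 p - (1 - p) log2 (1 - p)
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

n, k = 100, 30
exact = log2(comb(n, k))           # log2 of the count of n-bit strings with k ones
upper = n * binary_entropy(k / n)  # the 2^{nH(k/n)} approximation, in log2 form
print(round(exact, 2), round(upper, 2))  # close, with a gap of order log2(n)
```
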
The information gain in decision trees I G ( Y , X ) {\displaystyle IG(Y,X)} , which is equal to the difference between the entropy of Y {\displaystyle Y} and the conditional entropy of Y {\displaystyle Y} given X {\displaystyle X} , quantifies the expected information, or the reduction in entropy, from additionally knowing the value of an attribute X {\displaystyle X} . The information gain is used to identify which attributes of the dataset provide the most information and should be used to split the nodes of the tree optimally. Bayesian inference models often apply the principle of maximum entropy to obtain prior probability distributions. The idea is that the distribution that best represents the current state of knowledge of a system is the one with the largest entropy, and is therefore suitable to be the prior. Classification in machine learning performed by logistic regression or artificial neural networks often employs a standard loss function, called cross-entropy loss, that minimizes the average cross entropy between ground truth and predicted distributions. In general, cross entropy is a measure of the differences between two datasets similar to the KL divergence (also known as relative entropy). This article incorporates material from Shannon's entropy on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
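A sketch of information gain IG(Y, X) = H(Y) − H(Y | X) on a toy labeled dataset (made-up values): an attribute that separates the classes perfectly gains the full entropy of the labels, while an attribute independent of the labels gains nothing.

```python
from collections import Counter
from math import log2

def entropy(labels):
    # Shannon entropy of the empirical label distribution, in bits
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, attribute):
    # IG(Y, X) = H(Y) - H(Y | X): expected entropy reduction from splitting on X
    n = len(labels)
    groups = {}
    for y, x in zip(labels, attribute):
        groups.setdefault(x, []).append(y)
    h_cond = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - h_cond

y = [1, 1, 0, 0]
perfect = ['a', 'a', 'b', 'b']  # splits the classes exactly
useless = ['a', 'b', 'a', 'b']  # independent of the label
print(information_gain(y, perfect))  # 1.0
print(information_gain(y, useless))  # 0.0
```

A decision tree learner would prefer to split on the first attribute, since it removes all remaining label uncertainty at this node.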
[ { "paragraph_id": 0, "text": "In information theory, the entropy of a random variable is the average level of \"information\", \"surprise\", or \"uncertainty\" inherent to the variable's possible outcomes. Given a discrete random variable X {\\displaystyle X} , which takes values in the alphabet X {\\displaystyle {\\mathcal {X}}} and is distributed according to p : X → [ 0 , 1 ] {\\displaystyle p\\colon {\\mathcal {X}}\\to [0,1]} :", "title": "" }, { "paragraph_id": 1, "text": "where Σ {\\displaystyle \\Sigma } denotes the sum over the variable's possible values. The choice of base for log {\\displaystyle \\log } , the logarithm, varies for different applications. Base 2 gives the unit of bits (or \"shannons\"), while base e gives \"natural units\" nat, and base 10 gives units of \"dits\", \"bans\", or \"hartleys\". An equivalent definition of entropy is the expected value of the self-information of a variable.", "title": "" }, { "paragraph_id": 2, "text": "The concept of information entropy was introduced by Claude Shannon in his 1948 paper \"A Mathematical Theory of Communication\", and is also referred to as Shannon entropy. Shannon's theory defines a data communication system composed of three elements: a source of data, a communication channel, and a receiver. The \"fundamental problem of communication\" – as expressed by Shannon – is for the receiver to be able to identify what data was generated by the source, based on the signal it receives through the channel. Shannon considered various ways to encode, compress, and transmit messages from a data source, and proved in his famous source coding theorem that the entropy represents an absolute mathematical limit on how well data from the source can be losslessly compressed onto a perfectly noiseless channel. 
Shannon strengthened this result considerably for noisy channels in his noisy-channel coding theorem.", "title": "" }, { "paragraph_id": 3, "text": "Entropy in information theory is directly analogous to the entropy in statistical thermodynamics. The analogy results when the values of the random variable designate energies of microstates, so Gibbs formula for the entropy is formally identical to Shannon's formula. Entropy has relevance to other areas of mathematics such as combinatorics and machine learning. The definition can be derived from a set of axioms establishing that entropy should be a measure of how informative the average outcome of a variable is. For a continuous random variable, differential entropy is analogous to entropy. The definition E [ − log p ( X ) ] {\\displaystyle \\mathbb {E} [-\\log p(X)]} generalizes the above.", "title": "" }, { "paragraph_id": 4, "text": "The core idea of information theory is that the \"informational value\" of a communicated message depends on the degree to which the content of the message is surprising. If a highly likely event occurs, the message carries very little information. On the other hand, if a highly unlikely event occurs, the message is much more informative. For instance, the knowledge that some particular number will not be the winning number of a lottery provides very little information, because any particular chosen number will almost certainly not win. However, knowledge that a particular number will win a lottery has high informational value because it communicates the outcome of a very low probability event.", "title": "Introduction" }, { "paragraph_id": 5, "text": "The information content, also called the surprisal or self-information, of an event E {\\displaystyle E} is a function which increases as the probability p ( E ) {\\displaystyle p(E)} of an event decreases. 
When p ( E ) {\\displaystyle p(E)} is close to 1, the surprisal of the event is low, but if p ( E ) {\\displaystyle p(E)} is close to 0, the surprisal of the event is high. This relationship is described by the function", "title": "Introduction" }, { "paragraph_id": 6, "text": "where log {\\displaystyle \\log } is the logarithm, which gives 0 surprise when the probability of the event is 1. In fact, log is the only function that satisfies а specific set of conditions defined in section § Characterization.", "title": "Introduction" }, { "paragraph_id": 7, "text": "Hence, we can define the information, or surprisal, of an event E {\\displaystyle E} by", "title": "Introduction" }, { "paragraph_id": 8, "text": "or equivalently,", "title": "Introduction" }, { "paragraph_id": 9, "text": "Entropy measures the expected (i.e., average) amount of information conveyed by identifying the outcome of a random trial. This implies that rolling a die has higher entropy than tossing a coin because each outcome of a die toss has smaller probability ( p = 1 / 6 {\\displaystyle p=1/6} ) than each outcome of a coin toss ( p = 1 / 2 {\\displaystyle p=1/2} ).", "title": "Introduction" }, { "paragraph_id": 10, "text": "Consider a coin with probability p of landing on heads and probability 1 − p of landing on tails. The maximum surprise is when p = 1/2, for which one outcome is not expected over the other. In this case a coin flip has an entropy of one bit. (Similarly, one trit with equiprobable values contains log 2 3 {\\displaystyle \\log _{2}3} (about 1.58496) bits of information because it can have one of three values.) The minimum surprise is when p = 0 or p = 1, when the event outcome is known ahead of time, and the entropy is zero bits. When the entropy is zero bits, this is sometimes referred to as unity, where there is no uncertainty at all – no freedom of choice – no information. 
Other values of p give entropies between zero and one bits.", "title": "Introduction" }, { "paragraph_id": 11, "text": "Information theory is useful to calculate the smallest amount of information required to convey a message, as in data compression. For example, consider the transmission of sequences comprising the 4 characters 'A', 'B', 'C', and 'D' over a binary channel. If all 4 letters are equally likely (25%), one can not do better than using two bits to encode each letter. 'A' might code as '00', 'B' as '01', 'C' as '10', and 'D' as '11'. However, if the probabilities of each letter are unequal, say 'A' occurs with 70% probability, 'B' with 26%, and 'C' and 'D' with 2% each, one could assign variable length codes. In this case, 'A' would be coded as '0', 'B' as '10', 'C' as '110', and D as '111'. With this representation, 70% of the time only one bit needs to be sent, 26% of the time two bits, and only 4% of the time 3 bits. On average, fewer than 2 bits are required since the entropy is lower (owing to the high prevalence of 'A' followed by 'B' – together 96% of characters). The calculation of the sum of probability-weighted log probabilities measures and captures this effect. English text, treated as a string of characters, has fairly low entropy; i.e. it is fairly predictable. We can be fairly certain that, for example, 'e' will be far more common than 'z', that the combination 'qu' will be much more common than any other combination with a 'q' in it, and that the combination 'th' will be more common than 'z', 'q', or 'qu'. After the first few letters one can often guess the rest of the word. 
English text has between 0.6 and 1.3 bits of entropy per character of the message.", "title": "Introduction" }, { "paragraph_id": 12, "text": "Named after Boltzmann's Η-theorem, Shannon defined the entropy Η (Greek capital letter eta) of a discrete random variable X {\\textstyle X} , which takes values in the alphabet X {\\displaystyle {\\mathcal {X}}} and is distributed according to p : X → [ 0 , 1 ] {\\displaystyle p:{\\mathcal {X}}\\to [0,1]} such that p ( x ) := P [ X = x ] {\\displaystyle p(x):=\\mathbb {P} [X=x]} :", "title": "Definition" }, { "paragraph_id": 13, "text": "Here E {\\displaystyle \\mathbb {E} } is the expected value operator, and I is the information content of X. I ( X ) {\\displaystyle \\operatorname {I} (X)} is itself a random variable.", "title": "Definition" }, { "paragraph_id": 14, "text": "The entropy can explicitly be written as:", "title": "Definition" }, { "paragraph_id": 15, "text": "where b is the base of the logarithm used. Common values of b are 2, Euler's number e, and 10, and the corresponding units of entropy are the bits for b = 2, nats for b = e, and bans for b = 10.", "title": "Definition" }, { "paragraph_id": 16, "text": "In the case of p ( x ) = 0 {\\displaystyle p(x)=0} for some x ∈ X {\\displaystyle x\\in {\\mathcal {X}}} , the value of the corresponding summand 0 logb(0) is taken to be 0, which is consistent with the limit:", "title": "Definition" }, { "paragraph_id": 17, "text": "One may also define the conditional entropy of two variables X {\\displaystyle X} and Y {\\displaystyle Y} taking values from sets X {\\displaystyle {\\mathcal {X}}} and Y {\\displaystyle {\\mathcal {Y}}} respectively, as:", "title": "Definition" }, { "paragraph_id": 18, "text": "where p X , Y ( x , y ) := P [ X = x , Y = y ] {\\displaystyle p_{X,Y}(x,y):=\\mathbb {P} [X=x,Y=y]} and p Y ( y ) = P [ Y = y ] {\\displaystyle p_{Y}(y)=\\mathbb {P} [Y=y]} . 
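A minimal sketch of the conditional entropy just defined, computed from a small joint distribution; the numbers below are illustrative assumptions, not taken from the text:

```python
import math

# A toy joint distribution p(x, y) over X in {0, 1} and Y in {0, 1}.
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def conditional_entropy(p_xy) -> float:
    """H(X|Y) = -sum_{x,y} p(x,y) * log2( p(x,y) / p(y) ), in bits."""
    p_y = {}
    for (x, y), p in p_xy.items():
        p_y[y] = p_y.get(y, 0.0) + p  # marginalize out X
    return -sum(p * math.log2(p / p_y[y]) for (x, y), p in p_xy.items() if p > 0)

h = conditional_entropy(p_xy)
print(h)  # remaining uncertainty about X once Y is known: less than the 1 bit of H(X)
```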
This quantity should be understood as the remaining randomness in the random variable X {\\displaystyle X} given the random variable Y {\\displaystyle Y} .", "title": "Definition" }, { "paragraph_id": 19, "text": "Entropy can be formally defined in the language of measure theory as follows: Let ( X , Σ , μ ) {\\displaystyle (X,\\Sigma ,\\mu )} be a probability space. Let A ∈ Σ {\\displaystyle A\\in \\Sigma } be an event. The surprisal of A {\\displaystyle A} is", "title": "Definition" }, { "paragraph_id": 20, "text": "The expected surprisal of A {\\displaystyle A} is", "title": "Definition" }, { "paragraph_id": 21, "text": "A μ {\\displaystyle \\mu } -almost partition is a set family P ⊆ P ( X ) {\\displaystyle P\\subseteq {\\mathcal {P}}(X)} such that μ ( ∪ P ) = 1 {\\displaystyle \\mu (\\mathop {\\cup } P)=1} and μ ( A ∩ B ) = 0 {\\displaystyle \\mu (A\\cap B)=0} for all distinct A , B ∈ P {\\displaystyle A,B\\in P} . (This is a relaxation of the usual conditions for a partition.) The entropy of P {\\displaystyle P} is", "title": "Definition" }, { "paragraph_id": 22, "text": "Let M {\\displaystyle M} be a sigma-algebra on X {\\displaystyle X} . The entropy of M {\\displaystyle M} is", "title": "Definition" }, { "paragraph_id": 23, "text": "Finally, the entropy of the probability space is H μ ( Σ ) {\\displaystyle \\mathrm {H} _{\\mu }(\\Sigma )} , that is, the entropy with respect to μ {\\displaystyle \\mu } of the sigma-algebra of all measurable subsets of X {\\displaystyle X} .", "title": "Definition" }, { "paragraph_id": 24, "text": "Consider tossing a coin with known, not necessarily fair, probabilities of coming up heads or tails; this can be modelled as a Bernoulli process.", "title": "Example" }, { "paragraph_id": 25, "text": "The entropy of the unknown result of the next toss of the coin is maximized if the coin is fair (that is, if heads and tails both have equal probability 1/2). 
This is the situation of maximum uncertainty as it is most difficult to predict the outcome of the next toss; the result of each toss of the coin delivers one full bit of information. This is because", "title": "Example" }, { "paragraph_id": 26, "text": "However, if we know the coin is not fair, but comes up heads or tails with probabilities p and q, where p ≠ q, then there is less uncertainty. Every time it is tossed, one side is more likely to come up than the other. The reduced uncertainty is quantified in a lower entropy: on average each toss of the coin delivers less than one full bit of information. For example, if p = 0.7, then", "title": "Example" }, { "paragraph_id": 27, "text": "Uniform probability yields maximum uncertainty and therefore maximum entropy. Entropy, then, can only decrease from the value associated with uniform probability. The extreme case is that of a double-headed coin that never comes up tails, or a double-tailed coin that never results in a head. Then there is no uncertainty. The entropy is zero: each toss of the coin delivers no new information as the outcome of each coin toss is always certain.", "title": "Example" }, { "paragraph_id": 28, "text": "To understand the meaning of −Σ pi log(pi), first define an information function I in terms of an event i with probability pi. The amount of information acquired due to the observation of event i follows from Shannon's solution of the fundamental properties of information:", "title": "Characterization" }, { "paragraph_id": 29, "text": "Given two independent events, if the first event can yield one of n equiprobable outcomes and another has one of m equiprobable outcomes then there are mn equiprobable outcomes of the joint event. 
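The additivity of information over independent events can be checked directly, since a uniform choice among k equiprobable outcomes carries log2(k) bits (a sketch):

```python
import math

def uniform_entropy(k: int) -> float:
    """Entropy in bits of a uniform choice among k equiprobable outcomes."""
    return math.log2(k)

n, m = 8, 32
# Independent events: the joint outcome is one of m*n equiprobable possibilities,
# and the bits needed for the pair is the sum of the two individual amounts.
assert uniform_entropy(n * m) == uniform_entropy(n) + uniform_entropy(m)
print(uniform_entropy(n), uniform_entropy(m), uniform_entropy(n * m))  # 3.0 5.0 8.0
```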
This means that if log2(n) bits are needed to encode the first value and log2(m) to encode the second, one needs log2(mn) = log2(m) + log2(n) to encode both.", "title": "Characterization" }, { "paragraph_id": 30, "text": "Shannon discovered that a suitable choice of I {\\displaystyle \\operatorname {I} } is given by:", "title": "Characterization" }, { "paragraph_id": 31, "text": "In fact, the only possible values of I {\\displaystyle \\operatorname {I} } are I ( u ) = k log u {\\displaystyle \\operatorname {I} (u)=k\\log u} for k < 0 {\\displaystyle k<0} . Additionally, choosing a value for k is equivalent to choosing a value x > 1 {\\displaystyle x>1} for k = − 1 / log x {\\displaystyle k=-1/\\log x} , so that x corresponds to the base for the logarithm. Thus, entropy is characterized by the above four properties.", "title": "Characterization" }, { "paragraph_id": 32, "text": "The different units of information (bits for the binary logarithm log2, nats for the natural logarithm ln, bans for the decimal logarithm log10 and so on) are constant multiples of each other. For instance, in case of a fair coin toss, heads provides log2(2) = 1 bit of information, which is approximately 0.693 nats or 0.301 decimal digits. Because of additivity, n tosses provide n bits of information, which is approximately 0.693n nats or 0.301n decimal digits.", "title": "Characterization" }, { "paragraph_id": 33, "text": "The meaning of the events observed (the meaning of messages) does not matter in the definition of entropy. Entropy only takes into account the probability of observing a specific event, so the information it encapsulates is information about the underlying probability distribution, not the meaning of the events themselves.", "title": "Characterization" }, { "paragraph_id": 34, "text": "Another characterization of entropy uses the following properties. 
We denote pi = Pr(X = xi) and Ηn(p1, ..., pn) = Η(X).", "title": "Characterization" }, { "paragraph_id": 35, "text": "The rule of additivity has the following consequences: for positive integers bi where b1 + ... + bk = n,", "title": "Characterization" }, { "paragraph_id": 36, "text": "Choosing k = n, b1 = ... = bn = 1, this implies that the entropy of a certain outcome is zero: Η1(1) = 0. This implies that the efficiency of a source alphabet with n symbols can be defined simply as being equal to its n-ary entropy. See also Redundancy (information theory).", "title": "Characterization" }, { "paragraph_id": 37, "text": "The characterization here imposes an additive property with respect to a partition of a set. Meanwhile, the conditional probability is defined in terms of a multiplicative property, P ( A ∣ B ) ⋅ P ( B ) = P ( A ∩ B ) {\\displaystyle P(A\\mid B)\\cdot P(B)=P(A\\cap B)} . Observe that a logarithm mediates between these two operations. The conditional entropy and related quantities, in turn, inherit simple relations. The measure theoretic definition in the previous section defined the entropy as a sum over expected surprisals μ ( A ) ⋅ ln μ ( A ) {\\displaystyle \\mu (A)\\cdot \\ln \\mu (A)} for an extremal partition. Here the logarithm is ad hoc and the entropy is not a measure in itself. At least in the information theory of a binary string, log 2 {\\displaystyle \\log _{2}} lends itself to practical interpretations.", "title": "Characterization" }, { "paragraph_id": 38, "text": "Motivated by such relations, a plethora of related and competing quantities have been defined. For example, David Ellerman's analysis of a \"logic of partitions\" defines a competing measure in structures dual to that of subsets of a universal set. Information is quantified as \"dits\" (distinctions), a measure on partitions.
\"Dits\" can be converted into Shannon's bits, to get the formulas for conditional entropy, and so on.", "title": "Characterization" }, { "paragraph_id": 39, "text": "Another succinct axiomatic characterization of Shannon entropy was given by Aczél, Forte and Ng, via the following properties:", "title": "Characterization" }, { "paragraph_id": 40, "text": "It was shown that any function H {\\displaystyle \\mathrm {H} } satisfying the above properties must be a constant multiple of Shannon entropy, with a non-negative constant. Compared to the previously mentioned characterizations of entropy, this characterization focuses on the properties of entropy as a function of random variables (subadditivity and additivity), rather than the properties of entropy as a function of the probability vector p 1 , … , p n {\\displaystyle p_{1},\\ldots ,p_{n}} .", "title": "Characterization" }, { "paragraph_id": 41, "text": "It is worth noting that if we drop the \"small for small probabilities\" property, then H {\\displaystyle \\mathrm {H} } must be a non-negative linear combination of the Shannon entropy and the Hartley entropy.", "title": "Characterization" }, { "paragraph_id": 42, "text": "The Shannon entropy satisfies the following properties, for some of which it is useful to interpret entropy as the expected amount of information learned (or uncertainty eliminated) by revealing the value of a random variable X:", "title": "Further properties" }, { "paragraph_id": 43, "text": "The inspiration for adopting the word entropy in information theory came from the close resemblance between Shannon's formula and very similar known formulae from statistical mechanics.", "title": "Aspects" }, { "paragraph_id": 44, "text": "In statistical thermodynamics the most general formula for the thermodynamic entropy S of a thermodynamic system is the Gibbs entropy", "title": "Aspects" }, { "paragraph_id": 45, "text": "where kB is the Boltzmann constant, and pi is the probability of a microstate. 
The Gibbs entropy was defined by J. Willard Gibbs in 1878 after earlier work by Boltzmann (1872).", "title": "Aspects" }, { "paragraph_id": 46, "text": "The Gibbs entropy translates over almost unchanged into the world of quantum physics to give the von Neumann entropy introduced by John von Neumann in 1927:", "title": "Aspects" }, { "paragraph_id": 47, "text": "where ρ is the density matrix of the quantum mechanical system and Tr is the trace.", "title": "Aspects" }, { "paragraph_id": 48, "text": "At an everyday practical level, the links between information entropy and thermodynamic entropy are not evident. Physicists and chemists are apt to be more interested in changes in entropy as a system spontaneously evolves away from its initial conditions, in accordance with the second law of thermodynamics, rather than an unchanging probability distribution. As the minuteness of the Boltzmann constant kB indicates, the changes in S / kB for even tiny amounts of substances in chemical and physical processes represent amounts of entropy that are extremely large compared to anything in data compression or signal processing. In classical thermodynamics, entropy is defined in terms of macroscopic measurements and makes no reference to any probability distribution, which is central to the definition of information entropy.", "title": "Aspects" }, { "paragraph_id": 49, "text": "The connection between thermodynamics and what is now known as information theory was first made by Ludwig Boltzmann and expressed by his famous equation:", "title": "Aspects" }, { "paragraph_id": 50, "text": "where S {\\displaystyle S} is the thermodynamic entropy of a particular macrostate (defined by thermodynamic parameters such as temperature, volume, energy, etc.), W is the number of microstates (various combinations of particles in various energy states) that can yield the given macrostate, and kB is the Boltzmann constant. 
It is assumed that each microstate is equally likely, so that the probability of a given microstate is pi = 1/W. When these probabilities are substituted into the above expression for the Gibbs entropy (or equivalently kB times the Shannon entropy), Boltzmann's equation results. In information theoretic terms, the information entropy of a system is the amount of \"missing\" information needed to determine a microstate, given the macrostate.", "title": "Aspects" }, { "paragraph_id": 51, "text": "In the view of Jaynes (1957), thermodynamic entropy, as explained by statistical mechanics, should be seen as an application of Shannon's information theory: the thermodynamic entropy is interpreted as being proportional to the amount of further Shannon information needed to define the detailed microscopic state of the system, that remains uncommunicated by a description solely in terms of the macroscopic variables of classical thermodynamics, with the constant of proportionality being just the Boltzmann constant. Adding heat to a system increases its thermodynamic entropy because it increases the number of possible microscopic states of the system that are consistent with the measurable values of its macroscopic variables, making any complete state description longer. (See article: maximum entropy thermodynamics). Maxwell's demon can (hypothetically) reduce the thermodynamic entropy of a system by using information about the states of individual molecules; but, as Landauer (from 1961) and co-workers have shown, to function the demon himself must increase thermodynamic entropy in the process, by at least the amount of Shannon information he proposes to first acquire and store; and so the total thermodynamic entropy does not decrease (which resolves the paradox). 
Landauer's principle imposes a lower bound on the amount of heat a computer must generate to process a given amount of information, though modern computers are far less efficient.", "title": "Aspects" }, { "paragraph_id": 52, "text": "Shannon's definition of entropy, when applied to an information source, can determine the minimum channel capacity required to reliably transmit the source as encoded binary digits. Shannon's entropy measures the information contained in a message as opposed to the portion of the message that is determined (or predictable). Examples of the latter include redundancy in language structure or statistical properties relating to the occurrence frequencies of letter or word pairs, triplets etc. The minimum channel capacity can be realized in theory by using the typical set or in practice using Huffman, Lempel–Ziv or arithmetic coding. (See also Kolmogorov complexity.) In practice, compression algorithms deliberately include some judicious redundancy in the form of checksums to protect against errors. The entropy rate of a data source is the average number of bits per symbol needed to encode it. Shannon's experiments with human predictors show an information rate between 0.6 and 1.3 bits per character in English; the PPM compression algorithm can achieve a compression ratio of 1.5 bits per character in English text.", "title": "Aspects" }, { "paragraph_id": 53, "text": "If a compression scheme is lossless – one in which you can always recover the entire original message by decompression – then a compressed message has the same quantity of information as the original but communicated in fewer characters. It has more information (higher entropy) per character. A compressed message has less redundancy. 
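An order-0 (i.i.d.-character) entropy estimate of the kind discussed above can be sketched as follows. The sample string is an illustrative assumption; a zeroth-order model ignores the inter-character structure ('qu', 'th', and so on) that lets real compressors and human predictors do far better:

```python
import math
from collections import Counter

def entropy_rate_order0(text: str) -> float:
    """Order-0 per-character entropy estimate in bits: treats characters as
    i.i.d. with their observed frequencies, a crude upper bound on the true rate."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

sample = "the quick brown fox jumps over the lazy dog the end"
print(entropy_rate_order0(sample))  # a few bits/char, well above Shannon's 0.6-1.3 estimate
```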
Shannon's source coding theorem states that a lossless compression scheme cannot compress messages, on average, to have more than one bit of information per bit of message, but that any value less than one bit of information per bit of message can be attained by employing a suitable coding scheme. The entropy of a message per bit multiplied by the length of that message is a measure of how much total information the message contains. Shannon's theorem also implies that no lossless compression scheme can shorten all messages. If some messages come out shorter, at least one must come out longer due to the pigeonhole principle. In practical use, this is generally not a problem, because one is usually only interested in compressing certain types of messages, such as a document in English, as opposed to gibberish text, or digital photographs rather than noise, and it is unimportant if a compression algorithm makes some unlikely or uninteresting sequences larger.", "title": "Aspects" }, { "paragraph_id": 54, "text": "A 2011 study in Science estimates the world's technological capacity to store and communicate optimally compressed information normalized on the most effective compression algorithms available in the year 2007, therefore estimating the entropy of the technologically available sources.", "title": "Aspects" }, { "paragraph_id": 55, "text": "The authors estimate humankind's technological capacity to store information (fully entropically compressed) in 1986 and again in 2007. They break the information into three categories—to store information on a medium, to receive information through one-way broadcast networks, or to exchange information through two-way telecommunication networks.", "title": "Aspects" }, { "paragraph_id": 56, "text": "Entropy is one of several ways to measure biodiversity, and is applied in the form of the Shannon index.
A diversity index is a quantitative statistical measure of how many different types exist in a dataset, such as species in a community, accounting for ecological richness, evenness, and dominance. Specifically, Shannon entropy is the logarithm of D, the true diversity index with parameter equal to 1. The Shannon index is related to the proportional abundances of types.", "title": "Aspects" }, { "paragraph_id": 57, "text": "There are a number of entropy-related concepts that mathematically quantify information content of a sequence or message:", "title": "Aspects" }, { "paragraph_id": 58, "text": "(The \"rate of self-information\" can also be defined for a particular sequence of messages or symbols generated by a given stochastic process: this will always be equal to the entropy rate in the case of a stationary process.) Other quantities of information are also used to compare or relate different sources of information.", "title": "Aspects" }, { "paragraph_id": 59, "text": "It is important not to confuse the above concepts. Often it is only clear from context which one is meant. For example, when someone says that the \"entropy\" of the English language is about 1 bit per character, they are actually modeling the English language as a stochastic process and talking about its entropy rate. Shannon himself used the term in this way.", "title": "Aspects" }, { "paragraph_id": 60, "text": "If very large blocks are used, the estimate of per-character entropy rate may become artificially low because the probability distribution of the sequence is not known exactly; it is only an estimate. If one considers the text of every book ever published as a sequence, with each symbol being the text of a complete book, and if there are N published books, and each book is only published once, the estimate of the probability of each book is 1/N, and the entropy (in bits) is −log2(1/N) = log2(N). 
As a practical code, this corresponds to assigning each book a unique identifier and using it in place of the text of the book whenever one wants to refer to the book. This is enormously useful for talking about books, but it is not so useful for characterizing the information content of an individual book, or of language in general: it is not possible to reconstruct the book from its identifier without knowing the probability distribution, that is, the complete text of all the books. The key idea is that the complexity of the probabilistic model must be considered. Kolmogorov complexity is a theoretical generalization of this idea that allows the consideration of the information content of a sequence independent of any particular probability model; it considers the shortest program for a universal computer that outputs the sequence. A code that achieves the entropy rate of a sequence for a given model, plus the codebook (i.e. the probabilistic model), is one such program, but it may not be the shortest.", "title": "Aspects" }, { "paragraph_id": 61, "text": "The Fibonacci sequence is 1, 1, 2, 3, 5, 8, 13, .... Treating the sequence as a message and each number as a symbol, there are almost as many symbols as there are characters in the message, giving an entropy of approximately log2(n). The first 128 symbols of the Fibonacci sequence have an entropy of approximately 7 bits/symbol, but the sequence can be expressed using a formula [F(n) = F(n−1) + F(n−2) for n = 3, 4, 5, ..., F(1) = 1, F(2) = 1] and this formula has a much lower entropy and applies to any length of the Fibonacci sequence.", "title": "Aspects" }, { "paragraph_id": 62, "text": "In cryptanalysis, entropy is often used as a rough measure of the unpredictability of a cryptographic key, though its real uncertainty is unmeasurable. For example, a 128-bit key that is uniformly and randomly generated has 128 bits of entropy.
It also takes (on average) 2 127 {\\displaystyle 2^{127}} guesses to break by brute force. Entropy fails to capture the number of guesses required if the possible keys are not chosen uniformly. Instead, a measure called guesswork can be used to measure the effort required for a brute force attack.", "title": "Aspects" }, { "paragraph_id": 63, "text": "Other problems may arise from non-uniform distributions used in cryptography. Consider, for example, a 1,000,000-digit binary one-time pad combined with the plaintext using exclusive or. If the pad has 1,000,000 bits of entropy, it is perfect. If the pad has 999,999 bits of entropy, evenly distributed (each individual bit of the pad having 0.999999 bits of entropy) it may provide good security. But if the pad has 999,999 bits of entropy, where the first bit is fixed and the remaining 999,999 bits are perfectly random, the first bit of the ciphertext will not be encrypted at all.", "title": "Aspects" }, { "paragraph_id": 64, "text": "A common way to define entropy for text is based on the Markov model of text. For an order-0 source (each character is selected independently of the preceding characters), the binary entropy is:", "title": "Aspects" }, { "paragraph_id": 65, "text": "where pi is the probability of i. For a first-order Markov source (one in which the probability of selecting a character is dependent only on the immediately preceding character), the entropy rate is:", "title": "Aspects" }, { "paragraph_id": 66, "text": "where i is a state (certain preceding characters) and p i ( j ) {\\displaystyle p_{i}(j)} is the probability of j given i as the previous character.", "title": "Aspects" }, { "paragraph_id": 67, "text": "For a second-order Markov source, the entropy rate is", "title": "Aspects" }, { "paragraph_id": 68, "text": "A source alphabet with non-uniform distribution will have less entropy than if those symbols had uniform distribution (i.e. the \"optimized alphabet\").
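The entropy deficit of a non-uniform alphabet can be sketched numerically; the skewed distribution below is an illustrative assumption:

```python
import math

# A 4-symbol alphabet with a skewed distribution versus the uniform
# distribution over the same symbols (illustrative numbers).
skewed = [0.70, 0.26, 0.02, 0.02]
n = len(skewed)

h_skewed = -sum(p * math.log2(p) for p in skewed)
h_max = math.log2(n)  # the uniform distribution attains the maximum, 2 bits here

print(h_skewed, h_max)   # the skewed source falls short of the 2-bit maximum
print(h_skewed / h_max)  # this ratio is the efficiency defined in the text
```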
This deficiency in entropy can be expressed as a ratio called efficiency:", "title": "Efficiency (normalized entropy)" }, { "paragraph_id": 69, "text": "Applying the basic properties of the logarithm, this quantity can also be expressed as:", "title": "Efficiency (normalized entropy)" }, { "paragraph_id": 70, "text": "Efficiency has utility in quantifying the effective use of a communication channel. This formulation is also referred to as the normalized entropy, as the entropy is divided by the maximum entropy log b ( n ) {\\displaystyle {\\log _{b}(n)}} . Furthermore, the efficiency is indifferent to the choice of (positive) base b, since changing the base rescales numerator and denominator by the same constant.", "title": "Efficiency (normalized entropy)" }, { "paragraph_id": 71, "text": "The Shannon entropy is restricted to random variables taking discrete values. The corresponding formula for a continuous random variable with probability density function f(x) with finite or infinite support X {\\displaystyle \\mathbb {X} } on the real line is defined by analogy, using the above form of the entropy as an expectation:", "title": "Entropy for continuous random variables" }, { "paragraph_id": 72, "text": "This is the differential entropy (or continuous entropy). A precursor of the continuous entropy h[f] is the expression for the functional Η in the H-theorem of Boltzmann.", "title": "Entropy for continuous random variables" }, { "paragraph_id": 73, "text": "Although the analogy between both functions is suggestive, the following question must be asked: is the differential entropy a valid extension of the Shannon discrete entropy?
Differential entropy lacks a number of properties that the Shannon discrete entropy has – it can even be negative – and corrections have been suggested, notably limiting density of discrete points.", "title": "Entropy for continuous random variables" }, { "paragraph_id": 74, "text": "To answer this question, a connection must be established between the two functions:", "title": "Entropy for continuous random variables" }, { "paragraph_id": 75, "text": "This connection is needed in order to obtain a generally finite measure as the bin size goes to zero. In the discrete case, the bin size is the (implicit) width of each of the n (finite or infinite) bins whose probabilities are denoted by pn. As the continuous domain is generalized, the width must be made explicit.", "title": "Entropy for continuous random variables" }, { "paragraph_id": 76, "text": "To do this, start with a continuous function f discretized into bins of size Δ {\\displaystyle \\Delta } . By the mean-value theorem there exists a value xi in each bin such that", "title": "Entropy for continuous random variables" }, { "paragraph_id": 77, "text": "the integral of the function f can be approximated (in the Riemannian sense) by", "title": "Entropy for continuous random variables" }, { "paragraph_id": 78, "text": "where this limit and \"bin size goes to zero\" are equivalent.", "title": "Entropy for continuous random variables" }, { "paragraph_id": 79, "text": "We will denote", "title": "Entropy for continuous random variables" }, { "paragraph_id": 80, "text": "and expanding the logarithm, we have", "title": "Entropy for continuous random variables" }, { "paragraph_id": 81, "text": "As Δ → 0, we have", "title": "Entropy for continuous random variables" }, { "paragraph_id": 82, "text": "Note that log(Δ) → −∞ as Δ → 0, which requires a special definition of the differential or continuous entropy:", "title": "Entropy for continuous random variables" }, { "paragraph_id": 83, "text": "which is, as said before, referred to as the differential
entropy. This means that the differential entropy is not a limit of the Shannon entropy for n → ∞. Rather, it differs from the limit of the Shannon entropy by an infinite offset (see also the article on information dimension).", "title": "Entropy for continuous random variables" }, { "paragraph_id": 84, "text": "It turns out as a result that, unlike the Shannon entropy, the differential entropy is not in general a good measure of uncertainty or information. For example, the differential entropy can be negative; also it is not invariant under continuous co-ordinate transformations. This problem may be illustrated by a change of units when x is a dimensioned variable. f(x) will then have the units of 1/x. The argument of the logarithm must be dimensionless, otherwise it is improper, so that the differential entropy as given above will be improper. If Δ is some \"standard\" value of x (i.e. \"bin size\") and therefore has the same units, then a modified differential entropy may be written in proper form as:", "title": "Entropy for continuous random variables" }, { "paragraph_id": 85, "text": "and the result will be the same for any choice of units for x. In fact, the limit of discrete entropy as N → ∞ {\\displaystyle N\\rightarrow \\infty } would also include a term of log ( N ) {\\displaystyle \\log(N)} , which would in general be infinite. This is expected: continuous variables would typically have infinite entropy when discretized. The limiting density of discrete points is really a measure of how much easier a distribution is to describe than a distribution that is uniform over its quantization scheme.", "title": "Entropy for continuous random variables" }, { "paragraph_id": 86, "text": "Another useful measure of entropy that works equally well in the discrete and the continuous case is the relative entropy of a distribution. It is defined as the Kullback–Leibler divergence from the distribution to a reference measure m as follows. 
Assume that a probability distribution p is absolutely continuous with respect to a measure m, i.e. is of the form p(dx) = f(x)m(dx) for some non-negative m-integrable function f with m-integral 1; the relative entropy can then be defined as", "title": "Entropy for continuous random variables" }, { "paragraph_id": 87, "text": "In this form the relative entropy generalizes (up to change in sign) both the discrete entropy, where the measure m is the counting measure, and the differential entropy, where the measure m is the Lebesgue measure. If the measure m is itself a probability distribution, the relative entropy is non-negative, and zero if p = m as measures. It is defined for any measure space, hence coordinate independent and invariant under co-ordinate reparameterizations if one properly takes into account the transformation of the measure m. The relative entropy, and (implicitly) entropy and differential entropy, do depend on the \"reference\" measure m.", "title": "Entropy for continuous random variables" }, { "paragraph_id": 88, "text": "Terence Tao used entropy to make a useful connection trying to solve the Erdős discrepancy problem.", "title": "Use in number theory" }, { "paragraph_id": 89, "text": "Intuitively, the idea behind the proof was that if there is low information, in terms of the Shannon entropy, between consecutive random variables (here the random variable is defined using the Liouville function, a useful mathematical function for studying the distribution of primes, as XH = λ ( n + H ) {\\displaystyle \\lambda (n+H)} ), then the sum over an interval [n, n+H] could become arbitrarily large. For example, a sequence of +1's (values that XH could take) has trivially low entropy and its sum would become large.
But the key insight was that showing a non-negligible reduction in entropy as one expands H, leading in turn to unbounded growth of a mathematical object built from this random variable, is equivalent to showing the unbounded growth required by the Erdős discrepancy problem.", "title": "Use in number theory" }, { "paragraph_id": 90, "text": "The proof is quite involved; it brought together breakthroughs not just in the novel use of Shannon entropy, but also in the use of the Liouville function along with averages of modulated multiplicative functions in short intervals. Proving it also broke the \"parity barrier\" for this specific problem.", "title": "Use in number theory" }, { "paragraph_id": 91, "text": "While the use of Shannon entropy in the proof is novel, it is likely to open new research in this direction.", "title": "Use in number theory" }, { "paragraph_id": 92, "text": "Entropy has become a useful quantity in combinatorics.", "title": "Use in combinatorics" }, { "paragraph_id": 93, "text": "A simple example of this is an alternative proof of the Loomis–Whitney inequality: for every subset A ⊆ Zd, we have", "title": "Use in combinatorics" }, { "paragraph_id": 94, "text": "where Pi is the orthogonal projection in the ith coordinate:", "title": "Use in combinatorics" }, { "paragraph_id": 95, "text": "The proof follows as a simple corollary of Shearer's inequality: if X1, ..., Xd are random variables and S1, ..., Sn are subsets of {1, ..., d} such that every integer between 1 and d lies in exactly r of these subsets, then", "title": "Use in combinatorics" }, { "paragraph_id": 96, "text": "where ( X j ) j ∈ S i {\\displaystyle (X_{j})_{j\\in S_{i}}} is the Cartesian product of random variables Xj with indexes j in Si (so the dimension of this vector is equal to the size of Si).", "title": "Use in combinatorics" }, { "paragraph_id": 97, "text": "We sketch how Loomis–Whitney follows from this: Indeed, let X be a uniformly distributed random variable with values in A and so that
each point in A occurs with equal probability. Then (by the further properties of entropy mentioned above) Η(X) = log|A|, where |A| denotes the cardinality of A. Let Si = {1, 2, ..., i−1, i+1, ..., d}. The range of ( X j ) j ∈ S i {\\displaystyle (X_{j})_{j\\in S_{i}}} is contained in Pi(A) and hence H [ ( X j ) j ∈ S i ] ≤ log | P i ( A ) | {\\displaystyle \\mathrm {H} [(X_{j})_{j\\in S_{i}}]\\leq \\log |P_{i}(A)|} . Now use this to bound the right-hand side of Shearer's inequality and exponentiate both sides of the resulting inequality to obtain the result.", "title": "Use in combinatorics" }, { "paragraph_id": 98, "text": "For integers 0 < k < n let q = k/n. Then", "title": "Use in combinatorics" }, { "paragraph_id": 99, "text": "where", "title": "Use in combinatorics" }, { "paragraph_id": 100, "text": "A nice interpretation of this is that the number of binary strings of length n with exactly k ones is approximately 2 n H ( k / n ) {\\displaystyle 2^{n\\mathrm {H} (k/n)}} .", "title": "Use in combinatorics" }, { "paragraph_id": 101, "text": "Machine learning techniques arise largely from statistics and information theory. In general, entropy is a measure of uncertainty, and the objective of machine learning is to minimize uncertainty.", "title": "Use in machine learning" }, { "paragraph_id": 102, "text": "Decision tree learning algorithms use relative entropy to determine the decision rules that govern the data at each node. The information gain in decision trees I G ( Y , X ) {\\displaystyle IG(Y,X)} , which is equal to the difference between the entropy of Y {\\displaystyle Y} and the conditional entropy of Y {\\displaystyle Y} given X {\\displaystyle X} , quantifies the expected information, or the reduction in entropy, from additionally knowing the value of an attribute X {\\displaystyle X} . 
The information gain is used to identify which attributes of the dataset provide the most information and should be used to split the nodes of the tree optimally.", "title": "Use in machine learning" }, { "paragraph_id": 103, "text": "Bayesian inference models often apply the principle of maximum entropy to obtain prior probability distributions. The idea is that the distribution that best represents the current state of knowledge of a system is the one with the largest entropy, and is therefore suitable to be the prior.", "title": "Use in machine learning" }, { "paragraph_id": 104, "text": "Classification in machine learning performed by logistic regression or artificial neural networks often employs a standard loss function, called cross-entropy loss, that minimizes the average cross entropy between ground truth and predicted distributions. In general, cross entropy is a measure of the difference between two probability distributions, similar to the KL divergence (also known as relative entropy).", "title": "Use in machine learning" }, { "paragraph_id": 105, "text": "This article incorporates material from Shannon's entropy on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.", "title": "References" } ]
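The information gain described in the decision-tree paragraph above, IG(Y, X) = H(Y) − H(Y | X), can be estimated from samples in a few lines. This is a minimal sketch: the helper names and the toy attribute/label data are illustrative, not from the article.

```python
import math
from collections import Counter

def entropy(labels):
    """Empirical Shannon entropy, in bits, of a list of labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(xs, ys):
    """IG(Y, X) = H(Y) - H(Y | X), estimated from paired samples (x_i, y_i)."""
    n = len(ys)
    # H(Y | X): entropy of Y within each group sharing an X value,
    # weighted by how often that value occurs.
    h_y_given_x = 0.0
    for x_val in set(xs):
        group = [y for x, y in zip(xs, ys) if x == x_val]
        h_y_given_x += (len(group) / n) * entropy(group)
    return entropy(ys) - h_y_given_x

# Toy data: the attribute perfectly predicts the label, so knowing X removes
# all uncertainty and the gain equals H(Y) = 1 bit for a 50/50 label split.
print(information_gain(["a", "a", "b", "b"], [0, 0, 1, 1]))  # 1.0
```

In a decision-tree learner this gain would be computed for every candidate attribute at a node, and the attribute with the largest gain chosen for the split.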
In information theory, the entropy of a random variable is the average level of "information", "surprise", or "uncertainty" inherent to the variable's possible outcomes. Given a discrete random variable X , which takes values in the alphabet X and is distributed according to p : X → [ 0 , 1 ] , the entropy is H ( X ) := − ∑ x ∈ X p ( x ) log ⁡ p ( x ) , where Σ denotes the sum over the variable's possible values. The choice of base for log , the logarithm, varies for different applications. Base 2 gives the unit of bits, while base e gives "natural units" nat, and base 10 gives units of "dits", "bans", or "hartleys". An equivalent definition of entropy is the expected value of the self-information of a variable. The concept of information entropy was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication", and is also referred to as Shannon entropy. Shannon's theory defines a data communication system composed of three elements: a source of data, a communication channel, and a receiver. The "fundamental problem of communication" – as expressed by Shannon – is for the receiver to be able to identify what data was generated by the source, based on the signal it receives through the channel. Shannon considered various ways to encode, compress, and transmit messages from a data source, and proved in his famous source coding theorem that the entropy represents an absolute mathematical limit on how well data from the source can be losslessly compressed onto a perfectly noiseless channel. Shannon strengthened this result considerably for noisy channels in his noisy-channel coding theorem. Entropy in information theory is directly analogous to the entropy in statistical thermodynamics. The analogy results when the values of the random variable designate energies of microstates, so the Gibbs formula for the entropy is formally identical to Shannon's formula. Entropy has relevance to other areas of mathematics such as combinatorics and machine learning. 
The definition can be derived from a set of axioms establishing that entropy should be a measure of how informative the average outcome of a variable is. For a continuous random variable, differential entropy is analogous to entropy. The definition E [ − log ⁡ p ] generalizes the above.
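The definition above can be sketched directly from a probability vector, with the base of the logarithm selecting the unit; the function name and example values here are illustrative, not from the article.

```python
import math

def shannon_entropy(p, base=2):
    """H(X) = -sum_x p(x) * log(p(x)); base 2 gives bits, e nats, 10 hartleys."""
    # Terms with p(x) = 0 contribute nothing (0 * log 0 is taken as 0).
    return -sum(px * math.log(px, base) for px in p if px > 0)

# A fair coin carries exactly one bit of entropy; in nats this is ln 2.
print(shannon_entropy([0.5, 0.5]))          # 1.0
print(shannon_entropy([0.5, 0.5], math.e))  # 0.693... (= ln 2)

# Combinatorial reading (see "Use in combinatorics"): the number of binary
# strings of length n with exactly k ones satisfies
#   2^(n*H(k/n)) / (n + 1)  <=  C(n, k)  <=  2^(n*H(k/n)).
n, k = 20, 5
bound = 2 ** (n * shannon_entropy([k / n, 1 - k / n]))
assert bound / (n + 1) <= math.comb(n, k) <= bound
```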
2001-07-09T18:56:04Z
2023-12-18T06:27:32Z
[ "Template:Cite journal", "Template:Library resources box", "Template:Rp", "Template:PlanetMath attribution", "Template:More citations needed", "Template:Wikibooks", "Template:Cite conference", "Template:Isbn", "Template:Information theory", "Template:Main", "Template:Quote without source", "Template:Colbegin", "Template:Colend", "Template:Cite web", "Template:Dead link", "Template:Use dmy dates", "Template:Nlab", "Template:Portal", "Template:Reflist", "Template:Cite book", "Template:Compression Methods", "Template:Citation needed", "Template:Math", "Template:Other uses", "Template:Slink", "Template:Springer", "Template:Authority control", "Template:Short description" ]
https://en.wikipedia.org/wiki/Entropy_(information_theory)
15,446
Ithaca College
Ithaca College is a private college in Ithaca, New York. It was founded by William Egbert in 1892 as a conservatory of music and is set against the backdrop of the city of Ithaca (which is separate from the town), near Cayuga Lake, waterfalls, and gorges. Ithaca College is known for the Roy H. Park School of Communications. The college has a liberal arts focus, and offers several pre-professional programs, along with some graduate programs. Ithaca College was founded as the Ithaca Conservatory of Music in 1892 when a local violin teacher, William Grant Egbert, rented four rooms and arranged for the instruction of eight students. For nearly seven decades the institution flourished in the city of Ithaca, adding to its music curriculum the study of elocution, dance, physical education, speech correction, radio, business, and the liberal arts. In 1931 the conservatory was chartered as a private college under its current name, Ithaca College. The college was originally housed in the Boardman House, which later became the Ithaca College Museum of Art and was listed on the National Register of Historic Places in 1971. By 1960, some 2,000 students were in attendance. A modern campus was built on South Hill in the 1960s, and students were shuttled between the old and new during the construction. The hillside campus continued to grow in the ensuing 30 years to accommodate more than 6,000 students. As the campus expanded, the college also began to expand its curriculum. By the 1990s, some 2,000 courses in more than 100 programs of study were available in the college's five schools. The school attracts a multicultural student body with representatives from almost every state and from 78 foreign countries. In October 2020, the college announced that 130 out of 547 faculty positions would be eliminated due to a need to cut $30 million from the school's budget. This in turn was said to be a result of declining enrollment. 
4,957 undergraduate students enrolled for Fall 2020 versus 5,852 undergraduates in Fall 2019 and 6,101 in Fall 2018. Ithaca's current president is Dr. LaJerne Terry Cornish. She was named the school's 10th President in March 2022 after having served as interim President since August 30, 2021. She replaced Shirley M. Collado, who departed Ithaca College to become the president and CEO of College Track, a comprehensive college completion program. Collado was named the ninth president of Ithaca College on February 22, 2017, and assumed the presidency on July 1, 2017. She was previously executive vice chancellor and chief operating officer at Rutgers University–Newark and vice president of student affairs and dean of the college at Middlebury College. She is the first Dominican American to be named president of a college in the United States. During Collado's time as president she was the center of multiple controversies. Collado faced backlash when students and faculty discovered she was accused of sexually abusing a female patient while working as a psychologist in Washington, D.C., in 2000 and was convicted of sexual abuse in 2001. Students further questioned her transparency when she announced plans to cut 116 full-time faculty members, some of whom had worked at the school for decades, after receiving a $172,769 payment. Collado eventually announced in July 2021 that she would step down in January to become president and CEO of College Track. Collado succeeded Thomas Rochon, who was named eighth president of Ithaca College on April 11, 2008. Rochon took over as president of the college following Peggy Williams, who had announced on July 12, 2007, that she would retire from the presidency post effective May 31, 2009, following a one-year sabbatical. During the fall 2015 semester, multiple protests focusing on campus climate and Rochon's leadership were led by students and faculty. 
After multiple racially charged events, including student house party themes and racially tinged comments at administration-led programs, students, faculty and staff all decided to hold votes of "no confidence" in Rochon. Students voted "no confidence" by a count of 72% no confidence, 27% confidence, and 1% abstaining. The faculty voted 77.8% no confidence to 22.2% confidence. Rochon retired on July 1, 2017. Ithaca College's current campus was built in the 1960s on South Hill. The college's final academic department moved from downtown to the South Hill campus in 1968, making the move complete. Besides its Ithaca campus, Ithaca College has also operated satellite campuses in other cities. The Ithaca College London Center has been in existence since 1972. Ithaca runs the Ithaca College Los Angeles Program at the James B. Pendleton Center. Former programs include the Ithaca College Antigua Program and the Ithaca College Walkabout Down Under Program in Australia. Ithaca College also operates direct enrollment exchange programs with several universities, including Griffith University, La Trobe University, Murdoch University, and University of Tasmania (Australia); Chengdu Sport University and Beijing Sport University (China); University of Hong Kong (Hong Kong); Masaryk University (Czech Republic); Akita International University and University of Tsukuba (Japan); Hanyang University (Korea); Nanyang Technological University (Singapore); University of Valencia (Spain); and Jönköping University (Sweden). Ithaca College is also affiliated with study abroad programs such as IES Abroad and offers dozens of exchange or study abroad options to students. 
The college offers a curriculum with more than 100 degree programs in its five schools: Until the spring of 2011, several cross-disciplinary degree programs, along with the Center for the Study of Culture, Race, and Ethnicity, were housed in the Division of Interdisciplinary and International Studies; in 2011, the division was eliminated and its programs, centers and institutes were absorbed into other schools. As of 2017, the most popular majors included visual and performing arts, health professions and related programs, business, management, marketing, and related support services, and biological and biomedical sciences. Historically, various independent and national fraternities and sororities had active chapters at Ithaca College. However, due to a series of highly publicized hazing incidents in the 1980s, including one that was responsible for the death of a student, the college administration reevaluated its Greek life policy and only professional music fraternities were allowed to remain affiliated with the school. As of 2018, three recognized Greek organizations remain on campus, all of which are music-oriented: A fourth house, performing arts fraternity Kappa Gamma Psi (Iota Chapter), became inactive in 2008. Although there are potentially plans to reactivate the chapter, it is unclear whether this will be permitted due to the college's policy on Greek life. However, there are various Greek letter organizations at Ithaca College that are unaffiliated with the school, and therefore not subject to the same housing privileges or rules that contribute to the safety of their members, such as non-hazing and non-drinking policies. Additionally, while not particularly common, Ithaca College students may rush for Greek houses affiliated with nearby Ivy institution Cornell University, subject to the rules of each individual fraternity or sorority. Some Cornell-affiliated Greek organizations actively recruit Ithaca College students. 
There are a few unaffiliated fraternities that some Ithaca College students join - ΔΚΕ (Delta Kappa Epsilon), ΑΕΠ (Alpha Epsilon Pi), ΦΚΣ (Phi Kappa Sigma), ΦΙΑ (Phi Iota Alpha), ΛΥΛ (Lambda Upsilon Lambda), and ΚΣ (Kappa Sigma). There are also unaffiliated sororities, including ΓΔΠ (Gamma Delta Pi), ΠΛΧ (Pi Lambda Chi), and ΦΜΖ (Phi Mu Zeta). Ithaca competes in athletics at the NCAA Division III level as a member of the Liberty League and the Eastern College Athletic Conference (ECAC). Ithaca has one of Division III's strongest athletic programs, with the Bombers winning a total of 14 national titles in seven team sports and five individual sports. Ithaca was previously a member of the Empire 8. The Ithaca athletics nickname "Bombers" is unique in NCAA athletics, and the origins of the nickname are obscure. Ithaca College's sports teams were originally named the Cayugas, but the name was changed to the Bombers sometime in the 1930s. Some other names that have been used for Ithaca College's teams include: Blue Team, Blues, Blue and Gold, Collegians, and the Seneca Streeters. Several possibilities for the change to the "Bombers" have been posited. The most common explanation is that the school's baseball uniforms—white with navy blue pinstripes and an interlocking "IC" on the left chest—bear a striking resemblance to the distinctive home uniforms of the New York Yankees, who are known as the Bronx Bombers. It may also have referred to the Ithaca basketball team of that era and its propensity for half-court "bombs". Grumman Aircraft also manufactured airplanes including bombers in Ithaca for many years. The first "Bombers" reference on record was in the December 17, 1938 issue of the Rochester Times-Union in a men's basketball article. The name has at times sparked controversy for its perceived violent connotations. 
It is an occasional source of umbrage from Ithaca's prominent pacifist community, but the athletics department has consistently stated it has no interest in changing the name. The athletics logo has in the past incorporated World War II era fighter planes, but currently does not, and the school does not currently have a physical mascot to personify the name. In 2010 the school launched a contest to choose one. It received over 250 suggestions and narrowed the field down to three: a phoenix, a flying squirrel, and a Lake Beast. In June 2011, President Rochon announced that the school would discontinue the search due to opposition in the alumni community. Ithaca College remodeled the Hill Center in 2013. The building features hardwood floors (Ben Light Gymnasium) as well as coaches' offices. The building is home to Ithaca's men's and women's basketball teams, women's volleyball team, wrestling, and gymnastics. Ithaca also opened the Athletics & Events Center in 2011, a $65.5 million facility funded by donors. The facility is mainly used by the school's varsity athletes. It has a 47,000 square foot, 9-lane 50 meter Olympic-size pool. The building also has Glazer Arena, a 130,000 square foot event space. It is a track and field center that doubles as a practice facility for lacrosse, field hockey, soccer, baseball, tennis, and football. The facility was designed by the architectural firm Moody Nolan and began construction in June 2009. Coached by Jim Butterfield for 27 years, the football team has won three NCAA Division III National Football Championships in 1979, 1988 and 1991 (a total surpassed only by Augustana, Mount Union and Wisconsin-Whitewater). Bomber football teams made seven appearances in the Division III national championship game, the Amos Alonzo Stagg Bowl, a record that has since been surpassed by Mount Union in 2003. The Bombers play the SUNY Cortland Red Dragons for the Cortaca Jug, which was added in 1959 to an already competitive rivalry. 
The match-up is one of the most prominent in Division III college football. Gymnastics won the NCAA Division III national championship in 1998. Women's field hockey won the 1982 NCAA Division III Field Hockey Championship. The Men's and Women's Crew programs are housed in the Robert B. Tallman Rowing Center, a $2.6 million boathouse dedicated in 2012. The new boathouse replaced the Haskell Davidson Boathouse, which was constructed in 1974 on Cayuga Inlet. The old boathouse was razed to make room for the new facility. At 8,500 square feet, the Tallman boathouse is almost twice the size of the previous structure. Along with intercollegiate athletics, Ithaca College has a large intramural sport program. This extracurricular program serves approximately 25% of the undergraduate population yearly. Fourteen traditional team activities are offered throughout the year and include basketball, flag football, kickball, soccer, softball, ultimate frisbee, ski racing, and volleyball. For most activities, divisions are offered for men's, women's, and co-recreational teams. Throughout the year usually two or more activities run concurrently, and participants are able to play on a single-sex team and a co-recreational team for each activity. Ithaca's School of Business was the first college or university business school in the world to achieve LEED Platinum Certification; Yale University's was the second. Ithaca's Peggy Ryan Williams Center is also LEED Platinum certified. It makes extensive use of daylight in occupied spaces. There are sensors that regulate lighting and ventilation based on occupancy and natural light. Over 50% of the building energy comes from renewable sources such as wind power. The college also has a LEED Gold Certified building, the Athletics & Events Center. The college composts its dining hall waste, runs a "Take It or Leave It" Green move-out program, and offers a sustainable living option. 
It also operates an office supply collection and reuse program, as well as a sustainability education program during new student orientation. Ithaca received a B− grade on the Sustainable Endowments Institute's 2009 College Sustainability Report Card and an A− for 2010. In 2017, Ithaca College was listed as one of Princeton Review's top "green colleges" for being environmentally responsible. In the spring of 2007, then-President Peggy R. Williams signed the American College and University President's Climate Commitment (ACUPCC), pledging Ithaca College to the task of developing a strategy and long-range plan to achieve "carbon neutrality" at some point in the future. In 2009, the Ithaca College Board of Trustees approved the Ithaca College Climate Action Plan, which calls for 100% carbon neutrality by 2050 and offers a 40-year action plan to work toward that ambitious goal. The college purchases 100 percent of its electricity from renewable sources. Including offsets from a solar farm, the college's overall energy usage is 45 percent carbon neutral. The college aims to optimize investment returns and does not invest the endowment in on-campus sustainability projects, renewable energy funds, or community development loan funds. The college's investment policy reserves the right of the investment committee to restrict investments for any reason, which could include environmental and sustainability factors. While the Ithaca College Natural Lands has issued a statement that Ithaca College should join efforts calling for a moratorium on horizontal drilling and high volume ("slick water") hydraulic fracturing, or fracking, the college as a whole has refused to issue a statement regarding the issue. 
Ithaca College has over 70,000 alumni, with clubs in Boston, Chicago, Connecticut, Los Angeles, Metro New York, National Capital, North and South Carolina, Philadelphia, Rochester (NY), San Diego, and Southern Florida. Alumni events are hosted in cooperation with city-specific clubs and through a program called "IC on the Road". Notable current and former Ithaca College faculty include:
[ { "paragraph_id": 0, "text": "Ithaca College is a private college in Ithaca, New York. It was founded by William Egbert in 1892 as a conservatory of music and is set against the backdrop of the city of Ithaca (which is separate from the town), near Cayuga Lake, waterfalls, and gorges.", "title": "" }, { "paragraph_id": 1, "text": "Ithaca College is known for the Roy H. Park School of Communications. The college has a liberal arts focus, and offers several pre-professional programs, along with some graduate programs.", "title": "" }, { "paragraph_id": 2, "text": "Ithaca College was founded as the Ithaca Conservatory of Music in 1892 when a local violin teacher, William Grant Egbert, rented four rooms and arranged for the instruction of eight students. For nearly seven decades the institution flourished in the city of Ithaca, adding to its music curriculum the study of elocution, dance, physical education, speech correction, radio, business, and the liberal arts. In 1931 the conservatory was chartered as a private college under its current name, Ithaca College. The college was originally housed in the Boardman House, which later became the Ithaca College Museum of Art and was listed on the National Register of Historic Places in 1971.", "title": "History" }, { "paragraph_id": 3, "text": "By 1960, some 2,000 students were in attendance. A modern campus was built on South Hill in the 1960s, and students were shuttled between the old and new during the construction. The hillside campus continued to grow in the ensuing 30 years to accommodate more than 6,000 students.", "title": "History" }, { "paragraph_id": 4, "text": "As the campus expanded, the college also began to expand its curriculum. By the 1990s, some 2,000 courses in more than 100 programs of study were available in the college's five schools. The school attracts a multicultural student body with representatives from almost every state and from 78 foreign countries. 
In October 2020, the college announced that 130 out of 547 faculty positions would be eliminated due to a need to cut $30 million from the school's budget. This in turn was said to be a result of declining enrollment. 4,957 undergraduate students enrolled for Fall 2020 versus 5,852 undergraduates in Fall 2019 and 6,101 in Fall 2018.", "title": "History" }, { "paragraph_id": 5, "text": "Ithaca's current president is Dr. LaJerne Terry Cornish. She was named the school's 10th President in March 2022 after having served as interim President since August 30, 2021.", "title": "History" }, { "paragraph_id": 6, "text": "She replaced Shirley M. Collado, who departed Ithaca College to become the president and CEO of College Track, a comprehensive college completion program. Collado was named the ninth president of Ithaca College on February 22, 2017, and assumed the presidency on July 1, 2017. She was previously executive vice chancellor and chief operating officer at Rutgers University–Newark and vice president of student affairs and dean of the college at Middlebury College. She is the first Dominican American to be named president of a college in the United States. During Collado's time as president she was the center of multiple controversies. Collado faced backlash when students and faculty discovered she was accused of sexually abusing a female patient while working as a psychologist in Washington, D.C., in 2000 and was convicted of sexual abuse in 2001. Students further questioned her transparency when she announced plans to cut 116 full-time faculty members, some of whom had worked at the school for decades, after receiving a $172,769 payment. Collado eventually announced in July 2021 that she would step down in January to become president and CEO of College Track.", "title": "History" }, { "paragraph_id": 7, "text": "Collado succeeded Thomas Rochon, who was named eighth president of Ithaca College on April 11, 2008. 
Rochon took over as president of the college following Peggy Williams, who had announced on July 12, 2007, that she would retire from the presidency post effective May 31, 2009, following a one-year sabbatical. During the fall 2015 semester, multiple protests focusing on campus climate and Rochon's leadership were led by students and faculty. After multiple racially charged events, including student house party themes and racially tinged comments at administration-led programs, students, faculty and staff all decided to hold votes of \"no confidence\" in Rochon. Students voted \"no confidence\" by a count of 72% no confidence, 27% confidence, and 1% abstaining. The faculty voted 77.8% no confidence to 22.2% confidence. Rochon retired on July 1, 2017.", "title": "History" }, { "paragraph_id": 8, "text": "Ithaca College's current campus was built in the 1960s on South Hill. The college's final academic department moved from downtown to the South Hill campus in 1968, making the move complete.", "title": "Campus" }, { "paragraph_id": 9, "text": "Besides its Ithaca campus, Ithaca College has also operated satellite campuses in other cities. The Ithaca College London Center has been in existence since 1972. Ithaca runs the Ithaca College Los Angeles Program at the James B. 
Pendleton Center.", "title": "Campus" }, { "paragraph_id": 10, "text": "Former programs include the Ithaca College Antigua Program and the Ithaca College Walkabout Down Under Program in Australia.", "title": "Campus" }, { "paragraph_id": 11, "text": "Ithaca College also operates direct enrollment exchange programs with several universities, including Griffith University, La Trobe University, Murdoch University, and University of Tasmania (Australia); Chengdu Sport University and Beijing Sport University (China); University of Hong Kong (Hong Kong); Masaryk University (Czech Republic); Akita International University and University of Tsukuba (Japan); Hanyang University (Korea); Nanyang Technological University (Singapore); University of Valencia (Spain); and Jönköping University (Sweden). Ithaca College is also affiliated with study abroad programs such as IES Abroad and offers dozens of exchange or study abroad options to students.", "title": "Campus" }, { "paragraph_id": 12, "text": "The college offers a curriculum with more than 100 degree programs in its five schools:", "title": "Academics" }, { "paragraph_id": 13, "text": "Until the spring of 2011, several cross-disciplinary degree programs, along with the Center for the Study of Culture, Race, and Ethnicity, were housed in the Division of Interdisciplinary and International Studies; in 2011, the division was eliminated and its programs, centers and institutes were absorbed into other schools.", "title": "Academics" }, { "paragraph_id": 14, "text": "As of 2017, the most popular majors included visual and performing arts, health professions and related programs, business, management, marketing, and related support services, and biological and biomedical sciences.", "title": "Academics" }, { "paragraph_id": 15, "text": "Historically, various independent and national fraternities and sororities had active chapters at Ithaca College. 
However, due to a series of highly publicized hazing incidents in the 1980s, including one that was responsible for the death of a student, the college administration reevaluated its Greek life policy and only professional music fraternities were allowed to remain affiliated with the school.", "title": "Student life" }, { "paragraph_id": 16, "text": "As of 2018, three recognized Greek organizations remain on campus, all of which are music-oriented:", "title": "Student life" }, { "paragraph_id": 17, "text": "A fourth house, performing arts fraternity Kappa Gamma Psi (Iota Chapter), became inactive in 2008. Although there are potentially plans to reactivate the chapter, it is unclear whether this will be permitted due to the college's policy on Greek life.", "title": "Student life" }, { "paragraph_id": 18, "text": "However, there are various Greek letter organizations at Ithaca College that are unaffiliated with the school, and therefore not subject to the same housing privileges or rules that contribute to the safety of their members, such as non-hazing and non-drinking policies. Additionally, while not particularly common, Ithaca College students may rush for Greek houses affiliated with nearby Ivy institution Cornell University, subject to the rules of each individual fraternity or sorority. Some Cornell-affiliated Greek organizations actively recruit Ithaca College students.", "title": "Student life" }, { "paragraph_id": 19, "text": "There are a few unaffiliated fraternities that some Ithaca College students join - ΔΚΕ (Delta Kappa Epsilon), ΑΕΠ (Alpha Epsilon Pi), ΦΚΣ (Phi Kappa Sigma), ΦΙΑ (Phi Iota Alpha), ΛΥΛ (Lambda Upsilon Lambda), and ΚΣ (Kappa Sigma). 
There are also unaffiliated sororities, including ΓΔΠ (Gamma Delta Pi), ΠΛΧ (Pi Lambda Chi), and ΦΜΖ (Phi Mu Zeta).", "title": "Student life" }, { "paragraph_id": 20, "text": "Ithaca competes in athletics at the NCAA Division III level as a member of the Liberty League and the Eastern College Athletic Conference (ECAC). Ithaca has one of Division III's strongest athletic programs, with the Bombers winning a total of 14 national titles in seven team sports and five individual sports. Ithaca was previously a member of the Empire 8.", "title": "Athletics" }, { "paragraph_id": 21, "text": "The Ithaca athletics nickname \"Bombers\" is unique in NCAA athletics, and the origins of the nickname are obscure. Ithaca College's sports teams were originally named the Cayugas, but the name was changed to the Bombers sometime in the 1930s. Some other names that have been used for Ithaca College's teams include: Blue Team, Blues, Blue and Gold, Collegians, and the Seneca Streeters. Several possibilities for the change to the \"Bombers\" have been posited. The most common explanation is that the school's baseball uniforms—white with navy blue pinstripes and an interlocking \"IC\" on the left chest—bear a striking resemblance to the distinctive home uniforms of the New York Yankees, who are known as the Bronx Bombers. It may also have referred to the Ithaca basketball team of that era and its propensity for half-court \"bombs\". Grumman Aircraft also manufactured airplanes including bombers in Ithaca for many years. The first \"Bombers\" reference on record was in the December 17, 1938 issue of the Rochester Times-Union in a men's basketball article.", "title": "Athletics" }, { "paragraph_id": 22, "text": "The name has at times sparked controversy for its perceived violent connotations. It is an occasional source of umbrage from Ithaca's prominent pacifist community, but the athletics department has consistently stated it has no interest in changing the name. 
The athletics logo has in the past incorporated World War II-era fighter planes, but currently does not, and the school does not currently have a physical mascot to personify the name. In 2010 the school launched a contest to choose one. It received over 250 suggestions and narrowed the field down to three: a phoenix, a flying squirrel, and a Lake Beast. In June 2011, President Rochon announced that the school would discontinue the search due to opposition in the alumni community.", "title": "Athletics" }, { "paragraph_id": 23, "text": "Ithaca College remodeled the Hill Center in 2013. The building features hardwood floors (Ben Light Gymnasium) as well as coaches' offices. The building is home to Ithaca's men's and women's basketball teams, women's volleyball team, wrestling, and gymnastics. Ithaca also opened the Athletics & Events Center in 2011, a $65.5 million facility funded by donors. The facility is mainly used by the school's varsity athletes. It has a 47,000 square foot, 9-lane, 50-meter Olympic-size pool. The building also has Glazer Arena, a 130,000 square foot event space. It is a track and field center that doubles as a practice facility for lacrosse, field hockey, soccer, baseball, tennis, and football. The facility was designed by the architectural firm Moody Nolan; construction began in June 2009.", "title": "Athletics" }, { "paragraph_id": 24, "text": "Coached by Jim Butterfield for 27 years, the football team has won three NCAA Division III National Football Championships in 1979, 1988 and 1991 (a total surpassed only by Augustana, Mount Union and Wisconsin-Whitewater). Bomber football teams made seven appearances in the Division III national championship game, the Amos Alonzo Stagg Bowl, a record until Mount Union surpassed it in 2003. The Bombers play the SUNY Cortland Red Dragons for the Cortaca Jug, which was added in 1959 to an already competitive rivalry.
The match-up is one of the most prominent in Division III college football.", "title": "Athletics" }, { "paragraph_id": 25, "text": "Gymnastics won the NCAA Division III national championships in 1998.", "title": "Athletics" }, { "paragraph_id": 26, "text": "Women's field hockey won the 1982 NCAA Division III Field Hockey Championship.", "title": "Athletics" }, { "paragraph_id": 27, "text": "The Men's and Women's Crew programs are housed in the Robert B. Tallman Rowing Center, a $2.6 million boathouse dedicated in 2012. The new boathouse replaced the Haskell Davidson Boathouse, which was constructed in 1974 on Cayuga Inlet. The old boathouse was razed to make room for the new facility. At 8,500 square feet, the Tallman boathouse is almost twice the size of the previous structure.", "title": "Athletics" }, { "paragraph_id": 28, "text": "Along with intercollegiate athletics, Ithaca College has a large intramural sport program. This extracurricular program serves approximately 25% of the undergraduate population yearly. Fourteen traditional team activities are offered throughout the year and include basketball, flag football, kickball, soccer, softball, ultimate frisbee, ski racing, and volleyball.", "title": "Intramurals" }, { "paragraph_id": 29, "text": "For most activities, divisions are offered for men's, women's, and co-recreational teams. Throughout the year, usually two or more activities run concurrently, and participants are able to play on a single-sex team and a co-recreational team for each activity.", "title": "Intramurals" }, { "paragraph_id": 30, "text": "Ithaca's School of Business was the first college or university business school in the world to achieve LEED Platinum Certification; Yale University's was the second. Ithaca's Peggy Ryan Williams Center is also LEED Platinum certified. It makes extensive use of daylight in occupied spaces. There are sensors that regulate lighting and ventilation based on occupancy and natural light.
Over 50% of the building's energy comes from renewable sources such as wind power. The college also has a LEED Gold Certified building, the Athletics & Events Center. The college composts its dining hall waste, runs a \"Take It or Leave It\" Green move-out program, and offers a sustainable living option. It also operates an office supply collection and reuse program, as well as a sustainability education program during new student orientation. Ithaca received a B− grade on the Sustainable Endowments Institute's 2009 College Sustainability Report Card and an A− for 2010.", "title": "Sustainability" }, { "paragraph_id": 31, "text": "In 2017, Ithaca College was listed as one of Princeton Review's top \"green colleges\" for being environmentally responsible.", "title": "Sustainability" }, { "paragraph_id": 32, "text": "In the spring of 2007, then-President Peggy R. Williams signed the American College and University President's Climate Commitment (ACUPCC), pledging Ithaca College to the task of developing a strategy and long-range plan to achieve \"carbon neutrality\" at some point in the future. In 2009, the Ithaca College Board of Trustees approved the Ithaca College Climate Action Plan, which calls for 100% carbon neutrality by 2050 and offers a 40-year action plan to work toward that ambitious goal.", "title": "Sustainability" }, { "paragraph_id": 33, "text": "The college purchases 100 percent of its electricity from renewable sources. Including offsets from a solar farm, the college's overall energy usage is 45 percent carbon neutral.", "title": "Sustainability" }, { "paragraph_id": 34, "text": "The college aims to optimize investment returns and does not invest the endowment in on-campus sustainability projects, renewable energy funds, or community development loan funds.
The college's investment policy reserves the right of the investment committee to restrict investments for any reason, which could include environmental and sustainability factors.", "title": "Sustainability" }, { "paragraph_id": 35, "text": "While the Ithaca College Natural Lands has issued a statement that Ithaca College should join efforts calling for a moratorium on horizontal drilling and high volume (\"slick water\") hydraulic fracturing, or fracking, the college as a whole has refused to issue a statement regarding the issue.", "title": "Sustainability" }, { "paragraph_id": 36, "text": "Ithaca College has over 70,000 alumni, with clubs in Boston, Chicago, Connecticut, Los Angeles, Metro New York, National Capital, North and South Carolina, Philadelphia, Rochester (NY), San Diego, and Southern Florida. Alumni events are hosted in cooperation with city-specific clubs and through a program called \"IC on the Road\".", "title": "Notable alumni" }, { "paragraph_id": 37, "text": "Notable current and former Ithaca College faculty include:", "title": "Notable faculty" } ]
Ithaca College is a private college in Ithaca, New York. It was founded by William Egbert in 1892 as a conservatory of music and is set against the backdrop of the city of Ithaca, Cayuga Lake, waterfalls, and gorges. Ithaca College is known for the Roy H. Park School of Communications. The college has a liberal arts focus and offers several pre-professional programs, along with some graduate programs.
2002-02-25T15:51:15Z
2023-11-26T02:27:51Z
[ "Template:Ithaca College", "Template:Infobox university", "Template:Infobox US university ranking", "Template:Whom", "Template:Official website", "Template:Main", "Template:Coord", "Template:Reflist", "Template:NRISref", "Template:Dead link", "Template:Liberty League navbox", "Template:Central New York colleges", "Template:Webarchive", "Template:Commons category", "Template:Ithaca, New York", "Template:Authority control", "Template:Short description", "Template:As of", "Template:Cite web", "Template:Cite news" ]
https://en.wikipedia.org/wiki/Ithaca_College
15,447
Differential psychology
Differential psychology studies the ways in which individuals differ in their behavior and the processes that underlie it. This is a discipline that develops classifications (taxonomies) of psychological individual differences. This is distinguished from other aspects of psychology in that although psychology is ostensibly a study of individuals, modern psychologists often study groups, or attempt to discover general psychological processes that apply to all individuals. This area of psychology was first given the name "differential psychology" by William Stern in his book (1900), and it still retains that name. While prominent psychologists, including Stern, have been widely credited for the concept of individual differences, historical records show that it was Charles Darwin (1859) who first spurred the scientific interest in the study of individual differences. His interest was further pursued by his half-cousin Francis Galton in his attempt to quantify individual differences among people. For example, in evaluating the effectiveness of a new therapy, the mean performance of the therapy in one treatment group might be compared to the mean effectiveness of a placebo (or a well-known therapy) in a second, control group. In this context, differences between individuals in their reaction to the experimental and control manipulations are actually treated as errors rather than as interesting phenomena to study. This approach is applied because psychological research depends upon statistical controls that are only defined upon groups of people. Importantly, individuals can differ not only in their current state, but also in the magnitude or even direction of response to a given stimulus. Such phenomena, often explained in terms of inverted-U response curves, place differential psychology at an important location in such endeavours as personalized medicine, in which diagnoses are customised for an individual's response profile.
Individual differences research typically includes personality, temperament (neuro-chemically based behavioural traits), motivation, intelligence, ability, IQ, interests, values, self-concept, self-efficacy, and self-esteem (to name just a few). Although the United States has seen a decrease in individual differences research since the 1960s, researchers are found in a variety of applied and experimental fields. These fields include clinical psychology, educational psychology, industrial and organizational psychology, personality psychology, social psychology, behavioral genetics, and developmental psychology programs, in the neo-Piagetian theories of cognitive development in particular. To study individual differences, psychologists use a variety of methods. One method is to compare and analyze the psychology and behaviour of individuals or groups under different environmental conditions. By correlating observed psychological and behavioral differences with known accompanying environments, the relative roles of different variables in psychological and behavioral development can be probed. Psychophysiological experiments on both humans and other mammals include EEG, PET-scans, MRI, functional MRI, neurochemistry experiments with neurotransmitter and hormonal systems, caffeine and controlled drug challenges. These methods can be used for a search of biomarkers of consistent, biologically based behavioural patterns (temperament traits and symptoms of psychiatric disorders). Other sets of methods include behavioural experiments, to see how different people behave in similar settings. Behavioural experiments are often used in personality and social psychology, and include lexical and self-report methods where people are asked to complete paper-based and computer-based forms prepared by psychologists. Jarl, Vidkunn Coucheron (1958). "Historical Note on the Term Differential Psychology". Nordisk Psykologi. 10 (2): 114–116. doi:10.1080/00291463.1958.10780375.
[ { "paragraph_id": 0, "text": "Differential psychology studies the ways in which individuals differ in their behavior and the processes that underlie it. This is a discipline that develops classifications (taxonomies) of psychological individual differences. This is distinguished from other aspects of psychology in that although psychology is ostensibly a study of individuals, modern psychologists often study groups, or attempt to discover general psychological processes that apply to all individuals. This area of psychology was first given the name \"differential psychology\" by William Stern in his book (1900), and it still retains that name.", "title": "" }, { "paragraph_id": 1, "text": "While prominent psychologists, including Stern, have been widely credited for the concept of individual differences, historical records show that it was Charles Darwin (1859) who first spurred the scientific interest in the study of individual differences. His interest was further pursued by his half-cousin Francis Galton in his attempt to quantify individual differences among people.", "title": "" }, { "paragraph_id": 2, "text": "For example, in evaluating the effectiveness of a new therapy, the mean performance of the therapy in one treatment group might be compared to the mean effectiveness of a placebo (or a well-known therapy) in a second, control group. In this context, differences between individuals in their reaction to the experimental and control manipulations are actually treated as errors rather than as interesting phenomena to study. This approach is applied because psychological research depends upon statistical controls that are only defined upon groups of people.", "title": "" }, { "paragraph_id": 3, "text": "Importantly, individuals can differ not only in their current state, but also in the magnitude or even direction of response to a given stimulus.
Such phenomena, often explained in terms of inverted-U response curves, place differential psychology at an important location in such endeavours as personalized medicine, in which diagnoses are customised for an individual's response profile.", "title": "Importance of individual differences" }, { "paragraph_id": 4, "text": "Individual differences research typically includes personality, temperament (neuro-chemically based behavioural traits), motivation, intelligence, ability, IQ, interests, values, self-concept, self-efficacy, and self-esteem (to name just a few). Although the United States has seen a decrease in individual differences research since the 1960s, researchers are found in a variety of applied and experimental fields. These fields include clinical psychology, educational psychology, industrial and organizational psychology, personality psychology, social psychology, behavioral genetics, and developmental psychology programs, in the neo-Piagetian theories of cognitive development in particular.", "title": "Areas of study" }, { "paragraph_id": 5, "text": "To study individual differences, psychologists use a variety of methods. One method is to compare and analyze the psychology and behaviour of individuals or groups under different environmental conditions. By correlating observed psychological and behavioral differences with known accompanying environments, the relative roles of different variables in psychological and behavioral development can be probed. Psychophysiological experiments on both humans and other mammals include EEG, PET-scans, MRI, functional MRI, neurochemistry experiments with neurotransmitter and hormonal systems, caffeine and controlled drug challenges. These methods can be used for a search of biomarkers of consistent, biologically based behavioural patterns (temperament traits and symptoms of psychiatric disorders). Other sets of methods include behavioural experiments, to see how different people behave in similar settings.
Behavioural experiments are often used in personality and social psychology, and include lexical and self-report methods where people are asked to complete paper-based and computer-based forms prepared by psychologists.", "title": "Methods of research" }, { "paragraph_id": 6, "text": "Jarl, Vidkunn Coucheron (1958). \"Historical Note on the Term Differential Psychology\". Nordisk Psykologi. 10 (2): 114–116. doi:10.1080/00291463.1958.10780375.", "title": "References" } ]
Differential psychology studies the ways in which individuals differ in their behavior and the processes that underlie it. This is a discipline that develops classifications (taxonomies) of psychological individual differences. This is distinguished from other aspects of psychology in that although psychology is ostensibly a study of individuals, modern psychologists often study groups, or attempt to discover general psychological processes that apply to all individuals. This area of psychology was first given the name "differential psychology" by William Stern in his book (1900), and it still retains that name. While prominent psychologists, including Stern, have been widely credited for the concept of individual differences, historical records show that it was Charles Darwin (1859) who first spurred the scientific interest in the study of individual differences. His interest was further pursued by his half-cousin Francis Galton in his attempt to quantify individual differences among people. For example, in evaluating the effectiveness of a new therapy, the mean performance of the therapy in one treatment group might be compared to the mean effectiveness of a placebo in a second, control group. In this context, differences between individuals in their reaction to the experimental and control manipulations are actually treated as errors rather than as interesting phenomena to study. This approach is applied because psychological research depends upon statistical controls that are only defined upon groups of people.
2001-12-27T16:23:06Z
2023-12-03T00:38:16Z
[ "Template:Reflist", "Template:Cite book", "Template:Psychology", "Template:Authority control", "Template:Short description", "Template:Multiple issues", "Template:Clear", "Template:Library resources box", "Template:Cn", "Template:Cite journal", "Template:Citation", "Template:Portal bar" ]
https://en.wikipedia.org/wiki/Differential_psychology
15,450
Industrial and organizational psychology
Industrial and organizational psychology (I-O psychology) "focuses the lens of psychological science on a key aspect of human life, namely, their work lives. In general, the goals of I-O psychology are to better understand and optimize the effectiveness, health, and well-being of both individuals and organizations." It is an applied discipline within psychology and is an international profession. I-O psychology is also known as occupational psychology in the United Kingdom, organisational psychology in Australia and New Zealand, and work and organizational (WO) psychology throughout Europe and Brazil. Industrial, work, and organizational (IWO) psychology is the broader, more global term for the science and profession. I-O psychologists are trained in the scientist–practitioner model. As an applied field, the discipline involves both research and practice and I-O psychologists apply psychological theories and principles to organizations and the individuals within them. They contribute to an organization's success by improving the job performance, wellbeing, motivation, job satisfaction and the health and safety of employees. An I-O psychologist conducts research on employee attitudes, behaviors, emotions, motivation, and stress. The field is concerned with how these things can be improved through recruitment processes, training programs, feedback, management systems and other interventions. I-O psychology research and practice also includes the work–nonwork interface such as selecting and transitioning into a new career, occupational burnout, unemployment, retirement, and work-family conflict and balance. I-O psychology is one of the 17 recognized professional specialties by the American Psychological Association (APA). In the United States the profession is represented by Division 14 of the APA and is formally known as the Society for Industrial and Organizational Psychology (SIOP). Similar I-O psychology societies can be found in many countries. 
In 2009 the Alliance for Organizational Psychology was formed and is a federation of Work, Industrial, & Organizational Psychology societies and "network partners" from around the world. I-O psychology is an international science and profession; depending on the region of the world, it is referred to by different names. In North America, Canada and South Africa the title "I-O" psychology is used; in the United Kingdom, the field is known as occupational psychology. Occupational psychology in the UK is one of nine "protected titles" within the "practitioner psychologist" professions. The profession is regulated by the Health and Care Professions Council. In the UK, graduate programs in psychology, including occupational psychology, are accredited by the British Psychological Society. In Europe, someone with a specialist EuroPsy Certificate in Work and Organisational Psychology is a fully qualified psychologist and a specialist in the work psychology field. Industrial and organizational psychologists reaching the EuroPsy standard are recorded in the Register of European Psychologists. I-O psychology is one of the three main psychology specializations in Europe. In Australia, the title "organisational psychologist" is protected by law and regulated by the Australian Health Practitioner Regulation Agency (AHPRA). Organizational psychology is one of nine areas of specialist endorsement for psychology practice in Australia. In South Africa, industrial psychology is a registration category for the profession of psychologist as regulated by the Health Professions Council of South Africa (HPCSA).
In 2021 The British Psychological Society (BPS) Division of Occupational Psychology (DOP) and the Australian Psychological Society's (APS) College of Organizational Psychology joined the Alliance. The Alliance currently has member organizations representing Industrial, Work and Organisational psychology and IWO psychologists from Australia, Britain, Brazil, Canada, Chile, Europe, Germany, Hong Kong, Japan, Netherlands, New Zealand, Singapore, South Africa and the United States. The historical development of I-O psychology was paralleled in the US, the UK, Australia, Germany, the Netherlands, and Eastern European countries such as Romania. The roots of I-O psychology trace back to almost the beginning of psychology as a science, when Wilhelm Wundt founded one of the first psychological laboratories in 1879 in Leipzig, Germany. In the mid–1880s, Wundt trained two psychologists, Hugo Münsterberg and James McKeen Cattell, who went on to have a major influence on the emergence of I-O psychology. World War I was an impetus for the development of the field simultaneously in the UK and US. Munsterberg, one of the founders of I-O psychology, wrote, "Our aim is to sketch the outlines of a new science which is intermediate between the modern laboratory psychology and the problems of economics: the psychological experiment is systematically to be placed at the service of commerce and industry" (p. 3). Instead of viewing performance differences as human "errors," Cattell was one of the first to recognize the importance of differences among individuals as a way of better understanding work behavior. Walter Dill Scott, who was a contemporary of Cattell and was elected President of the American Psychological Association (APA) in 1919, was arguably the most prominent I-O psychologist of his time. Scott, along with Walter Van Dyke Bingham, worked at what was then Carnegie Institute of Technology, developing methods for selecting and training sales personnel. 
The "industrial" side of I-O psychology originated in research on individual differences, assessment, and the prediction of work performance. Industrial psychology crystallized during World War I, in response to the need to rapidly assign new troops to duty. Scott and Bingham volunteered to help with the testing and placement of more than a million U.S. Army recruits. In 1917, together with other prominent psychologists, they adapted a well-known intelligence test, the Stanford–Binet, which was designed for testing one individual at a time, to make it suitable for group testing. The new test was called the Army Alpha. After the war, increasing employment in the U.S. created opportunities for I-O psychology practitioners who called themselves "industrial psychologists". The "organizational" side of the field was focused on employee behavior, feelings, and well-being. During World War I, with the U.K. government's interest in worker productivity in munitions factories, Charles Myers studied worker fatigue and well-being. Following the war, Elton Mayo found that rest periods improved morale and reduced turnover in a Philadelphia textile factory. He later joined the ongoing Hawthorne studies, where he became interested in how workers' emotions and informal relationships affected productivity. The results of these studies ushered in the human relations movement. World War II brought renewed interest in ability testing. The U.S. military needed to accurately place recruits in new technologically advanced jobs. There was also concern with morale and fatigue in war-industry workers. In the 1960s, Arthur Kornhauser examined the impact on productivity of hiring mentally unstable workers. Kornhauser also examined the link between industrial working conditions and worker mental health, as well as the spillover into a worker's personal life of having an unsatisfying job.
Zickar noted that most of Kornhauser's I-O contemporaries favored management and Kornhauser was largely alone in his interest in protecting workers. Vinchur and Koppes (2010) observed that I-O psychologists' interest in job stress is a relatively recent development (p. 22). The industrial psychology division of the former American Association of Applied Psychology became a division within the APA, Division 14. It was initially called the Industrial and Business Psychology Division. In 1962, the name was changed to the Industrial Psychology Division. In 1973, it was renamed again, this time to the Division of Industrial and Organizational Psychology. In 1982, the unit became more independent of APA, and its name was changed again, this time to the Society for Industrial and Organizational Psychology. The name change of the division from "industrial psychology" to "industrial and organizational psychology" reflected the shift in the work of industrial psychologists who had originally addressed work behavior from the individual perspective, examining performance and attitudes of individual workers. Their work became broader. Group behavior in the workplace became a worthy subject of study. The emphasis on the "organizational" underlined the fact that when an individual joins an organization (e.g., the organization that hired him or her), he or she will be exposed to a common goal and a common set of operating procedures. In the 1970s in the UK, references to occupational psychology became more common than references to I-O psychology. According to Bryan and Vinchur, "while organizational psychology increased in popularity through [the 1960s and 1970s], research and practice in the traditional areas of industrial psychology continued, primarily driven by employment legislation and case law". There was a focus on fairness and validity in selection efforts as well as in the job analyses that undergirded selection instruments.
For example, I-O psychology showed increased interest in behaviorally anchored rating scales. What critics there were of I-O psychology accused the discipline of being responsive only to the concerns of management. From the 1980s to the 2010s, other changes in I-O psychology took place. Researchers increasingly adopted a multi-level approach, attempting to understand behavioral phenomena from both the level of the organization and the level of the individual worker. There was also an increased interest in the needs and expectations of employees as individuals. For example, an emphasis on organizational justice and the psychological contract took root, as well as the more traditional concerns of selection and training. Methodological innovations (e.g., meta-analyses, structural equation modeling) were adopted. With the passage of the Americans with Disabilities Act in 1990 and parallel legislation elsewhere in the world, I-O psychology saw an increased emphasis on "fairness in personnel decisions." Training research relied increasingly on advances in educational psychology and cognitive science. I-O researchers employ both qualitative and quantitative methods, although quantitative methods are far more common. Basic quantitative methods used in I-O psychology include correlation, multiple regression, and analysis of variance. More advanced statistical methods include logistic regression, structural equation modeling, and hierarchical linear modeling (HLM; also known as multilevel modeling). I-O researchers have also employed meta-analysis. I-O psychologists also employ psychometric methods, including those associated with classical test theory, generalizability theory, and item response theory (IRT). I-O psychologists have also employed qualitative methods, which largely involve focus groups, interviews, and case studies. I-O psychologists conducting research on organizational culture have employed ethnographic techniques and participant observation.
A qualitative technique associated with I-O psychology is Flanagan's critical incident technique. I-O psychologists have also coordinated the use of quantitative and qualitative methods in the same study. I-O psychologists deal with a wide range of topics concerning people in the workplace. Job analysis encompasses a number of different methods including, but not limited to, interviews, questionnaires, task analysis, and observation. A job analysis primarily involves the systematic collection of information about a job. A task-oriented job analysis involves an assessment of the duties, tasks, and/or competencies a job requires. By contrast, a worker-oriented job analysis involves an examination of the knowledge, skills, abilities, and other characteristics (KSAOs) required to successfully perform the work. Information obtained from job analyses is used for many purposes, including the creation of job-relevant selection procedures, the development of criteria for performance appraisals, the conducting of performance appraisals, and the development and implementation of training programs. I-O psychologists design (a) recruitment processes and (b) personnel selection systems. Personnel recruitment is the process of identifying qualified candidates in the workforce and getting them to apply for jobs within an organization. Personnel recruitment processes include developing job announcements, placing ads, defining key qualifications for applicants, and screening out unqualified applicants. Personnel selection is the systematic process of hiring and promoting personnel. Personnel selection systems employing I-O methods use quantitative data to determine the most qualified candidates. This can involve the use of psychological tests, biographical information blanks, interviews, work samples, and assessment centers.
Personnel selection procedures are usually validated, i.e., shown to be job relevant, using one or more of the following types of validity: content validity, construct validity, and/or criterion-related validity. I-O psychologists must adhere to professional standards in personnel selection efforts. SIOP (e.g., Principles for validation and use of personnel selection procedures) and APA together with the National Council on Measurement in Education (e.g., Standards for educational and psychological testing) are sources of those standards. The Equal Employment Opportunity Commission's Uniform guidelines are also influential in guiding personnel selection decisions. A meta-analysis of selection methods found that general mental ability (g factor) was the best overall predictor of job performance and attainment in training. Performance appraisal or performance evaluation is the process in which an individual's or a group's work behaviors and outcomes are assessed against managers' and others' expectations for the job. Performance appraisal is used for a variety of purposes, including as the basis for employment decisions (promotion, raises, and termination), feedback to employees, and training needs assessment. Performance management is the process of providing performance feedback relative to expectations, as well as information relevant to helping a worker improve his or her performance (e.g., coaching, mentoring). Performance management may also include documenting and tracking performance information for organizational evaluation purposes. Individual assessment involves the measurement of individual differences. I-O psychologists perform individual assessments in order to evaluate differences among candidates for employment as well as differences among employees. The constructs measured pertain to job performance. With candidates for employment, individual assessment is often part of the personnel selection process.
These assessments can include written tests, aptitude tests, physical tests, psycho-motor tests, personality tests, integrity and reliability tests, work samples, simulations, and assessment centers. A more recent focus of the I-O field is the health, safety, and well-being of employees. Topics include occupational safety, occupational stress, and workplace bullying, aggression and violence. There are many features of work that can be stressful to employees. Research has identified a number of job stressors (environmental conditions at work) that contribute to strains (adverse behavioral, emotional, physical, and psychological reactions). Occupational stress can have implications for organizational performance because of the emotions job stress evokes. For example, a job stressor such as conflict with a supervisor can precipitate anger that in turn motivates counterproductive workplace behaviors. A number of prominent models of job stress have been developed to explain the job stress process, including the person-environment (P-E) fit model, which was developed by University of Michigan social psychologists, and the demand-control(-support) and effort-reward imbalance models, which were developed by sociologists. Research has also examined occupational stress in specific occupations, including police, general practitioners, and dentists. Another concern has been the relation of occupational stress to family life. Other I-O researchers have examined gender differences in leadership style and job stress and strain in the context of male- and female-dominated industries, and unemployment-related distress. Occupational stress has also been linked to a lack of fit between people and their jobs. Accidents and safety in the workplace are important because of the serious injuries and fatalities that are all too common. Research has linked accidents to psychosocial factors in the workplace including overwork that leads to fatigue, workplace violence, and working night shifts. 
"Stress audits" can help organizations remain compliant with various occupational safety regulations. Psychosocial hazards can affect musculoskeletal disorders. A psychosocial factor related to accident risk is safety climate, which refers to employees' perceptions of the extent to which their work organization prioritizes safety. By contrast, psychosocial safety climate refers to management's "policies, practices, and procedures" aimed at protecting workers' psychological health. Research on safety leadership is also relevant to understanding employee safety performance. Research suggests that safety-oriented transformational leadership is associated with a positive safety climate and safe worker practices. I-O psychologists are concerned with the related topics of workplace bullying, aggression, and violence. For example, I-O research found that exposure to workplace violence elicited ruminative thinking. Ruminative thinking is associated with poor well-being. Research has found that interpersonal aggressive behaviour is associated with worse team performance. A new discipline, occupational health psychology (OHP), emerged from both health psychology and I-O psychology as well as occupational medicine. OHP concerns itself with such topic areas as the impact of occupational stressors on mental and physical health, the health impact of involuntary unemployment, violence and bullying in the workplace, psychosocial factors that influence accident risk and safety, work-family balance, and interventions designed to improve/protect worker health. Spector observed that one of the problems facing I-O psychologists in the late 20 century who were interested in the health of working people was resistance within the field to publishing papers on worker health. In the 21 century, OHP topics have become popular at the Society for Industrial and Organizational Psychology conference. 
Work design concerns the "content and organisation of one's work tasks, activities, relationships, and responsibilities." Research has demonstrated that work design has important implications for individual employees (e.g., level of engagement, job strain, chance of injury), teams (e.g., how effectively teams coordinate their activities), organizations (e.g., productivity, safety, efficiency targets), and society (e.g., whether a nation utilizes the skills of its population or promotes effective aging). I-O psychologists review job tasks, relationships, and an individual's way of thinking about their work to ensure that their roles are meaningful and motivating, thus creating greater productivity and job satisfaction. Deliberate interventions aimed at altering work design are sometimes referred to as work redesign. Such interventions can be initiated by the management of an organization (e.g., job rotation, job enlargement, job enrichment) or by individual workers (e.g., job crafting, role innovation, idiosyncratic deals). Training involves the systematic teaching of skills, concepts, or attitudes that results in improved performance in another environment. Because many people hired for a job are not already versed in all the tasks the job requires, training may be needed to help the individual perform the job effectively. Evidence indicates that training is often effective, and that it succeeds in terms of higher net sales and gross profitability per employee. Similar to performance management (see above), an I-O psychologist would employ a job analysis in concert with the application of the principles of instructional design to create an effective training program. A training program is likely to include a summative evaluation at its conclusion in order to ensure that trainees have met the training objectives and can perform the target work tasks at an acceptable level. 
Kirkpatrick describes four levels of criteria by which to evaluate training: reactions, learning, behavior, and results. Training programs often include formative evaluations to assess the effect of the training as the training proceeds. Formative evaluations can be used to locate problems in training procedures and help I-O psychologists make corrective adjustments while training is ongoing. The foundation for training programs is learning. Learning outcomes can be organized into three broad categories: cognitive, skill-based, and affective outcomes. Cognitive training is aimed at instilling declarative knowledge or the knowledge of rules, facts, and principles (e.g., police officer training covers laws and court procedures). Skill-based training aims to impart procedural knowledge (e.g., skills needed to use a special tool) or technical skills (e.g., understanding the workings of a software program). Affective training concerns teaching individuals to develop specific attitudes or beliefs that predispose trainees to behave a certain way (e.g., show commitment to the organization, appreciate diversity). A needs assessment, an analysis of corporate and individual goals, is often undertaken prior to the development of a training program. In addition, a careful training needs analysis is required in order to develop a systematic understanding of where training is needed, what should be taught, and who will be trained. A training needs analysis typically involves a three-step process that includes organizational analysis, task analysis, and person analysis. An organizational analysis is an examination of organizational goals and resources as well as the organizational environment. The results of an organizational analysis help to determine where training should be directed. The analysis identifies the training needs of different departments or subunits. It systematically assesses manager, peer, and technological support for transfer of training. 
An organizational analysis also takes into account the climate of the organization and its subunits. For example, if a climate for safety is emphasized throughout the organization or in subunits of the organization (e.g., production), then training needs will likely reflect an emphasis on safety. A task analysis uses the results of a job analysis to determine what is needed for successful job performance, contributing to training content. With organizations increasingly trying to identify "core competencies" that are required for all jobs, task analysis can also include an assessment of competencies. A person analysis identifies which individuals within an organization should receive training and what kind of instruction they need. Employee needs can be assessed using a variety of methods that identify weaknesses that training can address. Work motivation reflects the energy an individual applies "to initiate work-related behavior, and to determine its form, direction, intensity, and duration." Understanding what motivates an organization's employees is central to I-O psychology. Motivation is generally thought of as a theoretical construct that fuels behavior. An incentive is an anticipated reward that is thought to incline a person to behave a certain way. Motivation varies among individuals. To understand its influence on behavior, motivation must be examined together with ability and environmental influences. Because of motivation's role in influencing workplace behavior and performance, many organizations structure the work environment to encourage productive behaviors and discourage unproductive behaviors. Motivation involves three psychological processes: arousal, direction, and intensity. Arousal is what initiates action. It is often fueled by a person's need or desire for something that is missing from his or her life, either totally or partially. Direction refers to the path employees take in accomplishing the goals they set for themselves. 
Intensity is the amount of energy employees put into goal-directed work performance. The level of intensity often reflects the importance and difficulty of the goal. These psychological processes involve four factors. First, motivation serves to direct attention, focusing on particular issues, people, tasks, etc. Second, it serves to stimulate effort. Third, motivation influences persistence. Finally, motivation influences the choice and application of task-related strategies. Organizational climate is the perceptions of employees about what is important in an organization, that is, what behaviors are encouraged versus discouraged. It can be assessed in individual employees (climate perceptions) or averaged across groups of employees within a department or organization (organizational climate). Climates are usually focused on specific employee outcomes, or what is called "climate for something". More than a dozen types of climates have been assessed and studied; among the more popular are climates for safety and for service. Climate concerns organizational policies and practices that encourage or discourage specific behaviors by employees. Shared perceptions of what the organization emphasizes (organizational climate) are part of organizational culture, but culture concerns far more than shared perceptions, as discussed in the next section. While there is no universal definition for organizational culture, a collective understanding shares the following assumptions: ... that they are related to history and tradition, have some depth, are difficult to grasp and account for, and must be interpreted; that they are collective and shared by members of groups and primarily ideational in character, having to do with values, understandings, beliefs, knowledge, and other intangibles; and that they are holistic and subjective rather than strictly rational and analytical. 
Organizational culture has been shown to affect important organizational outcomes such as performance, attraction, recruitment, retention, employee satisfaction, and employee well-being. There are three levels of organizational culture: artifacts, shared values, and basic beliefs and assumptions. Artifacts comprise the physical components of the organization that relay cultural meaning. Shared values are individuals' preferences regarding certain aspects of the organization's culture (e.g., loyalty, customer service). Basic beliefs and assumptions include individuals' impressions about the trustworthiness and supportiveness of an organization, and are often deeply ingrained within the organization's culture. In addition to an overall culture, organizations also have subcultures. Subcultures can be departmental (e.g., different work units) or defined by geographical distinction. While there is no single "type" of organizational culture, some researchers have developed models to describe different organizational cultures. Group behavior involves the interactions among individuals in a collective. Most I-O group research concerns teams: groups in which people work together to achieve the same task goals. The individuals' opinions, attitudes, and adaptations affect group behavior, with group behavior in turn affecting those opinions, etc. The interactions are thought to fulfill some need satisfaction in an individual who is part of the collective. Organizations often organize teams because teams can accomplish far more work in a short period of time than an individual can. I-O research has examined the harm workplace aggression does to team performance. Team composition, or the configuration of team member knowledge, skills, abilities, and other characteristics, fundamentally influences teamwork. Team composition can be considered in the selection and management of teams to increase the likelihood of team success. 
To achieve high-quality results, teams built with members having higher skill levels are more likely to be effective than teams built around members having lesser skills; teams that include members with a diversity of skills are also likely to show improved team performance. Team members should also be compatible in terms of personality traits, values, and work styles. There is substantial evidence that personality traits and values can shape the nature of teamwork, and influence team performance. A fundamental question in team task design is whether or not a task is even appropriate for a team. Tasks that require predominantly independent work are best left to individuals, whereas tasks that consist primarily of interdependent work are better suited to teams. When a given task is appropriate for a team, task design can play a key role in team effectiveness. Job characteristics theory identifies core job dimensions that affect motivation, satisfaction, performance, etc. These dimensions include skill variety, task identity, task significance, autonomy, and feedback. The dimensions map well to the team environment. Individual contributors who perform team tasks that are challenging, interesting, and engaging are more likely to be motivated to exert greater effort and perform better than team members who are working on tasks that lack those characteristics. Organizational support systems affect team effectiveness and provide resources for teams operating in the multi-team environment. During the chartering of new teams, organizational enabling resources are first identified. Examples of enabling resources include facilities, equipment, information, training, and leadership. Team-specific resources (e.g., budgetary resources, human resources) are typically made available. Team-specific human resources represent the individual contributors who are selected to be team members. 
Intra-team processes (e.g., task design, task assignment) involve these team-specific resources. Teams also function in dynamic multi-team environments. Teams often must respond to shifting organizational contingencies. Organizational reward systems reinforce and strengthen individual team members' efforts; such efforts contribute towards reaching team goals. In other words, rewards that are given to individual team members should be contingent upon the performance of the entire team. Several design elements are needed to enable organizational reward systems to operate successfully. First, for a collective assessment to be appropriate for individual team members, the group's tasks must be highly interdependent. If this is not the case, individual assessment is more appropriate than team assessment. Second, individual-level reward systems and team-level reward systems must be compatible. For example, it would be unfair to reward the entire team for a job well done if only one team member did most of the work. That team member would most likely view teams and teamwork negatively, and would not want to work on a team in the future. Third, an organizational culture must be created such that it supports and rewards employees who believe in the value of teamwork and who maintain a positive attitude towards team-based rewards. Goals potentially motivate team members when goals contain three elements: difficulty, acceptance, and specificity. Under difficult goal conditions, teams with more committed members tend to outperform teams with less committed members. When team members commit to team goals, team effectiveness is a function of how supportive members are of each other. The goals of individual team members and team goals interact. Team and individual goals must be coordinated. Individual goals must be consistent with team goals in order for a team to be effective. 
Job satisfaction is often thought to reflect the extent to which a worker likes his or her job, or individual aspects or facets of jobs. It is one of the most heavily researched topics in I-O psychology. Job satisfaction has theoretical and practical utility for the field. It has been linked to important job outcomes including absenteeism, accidents, counterproductive work behavior, customer service, cyberloafing, job performance, organizational citizenship behavior, physical and psychological health, and turnover. A meta-analysis found job satisfaction to be related to life satisfaction, happiness, positive affect, and the absence of negative affect. Productive behavior is defined as employee behavior that contributes positively to the goals and objectives of an organization. When an employee begins a new job, there is a transition period during which he or she may not contribute significantly. To assist with this transition an employee typically requires job-related training. In financial terms, productive behavior represents the point at which an organization begins to achieve some return on the investment it has made in a new employee. IO psychologists are ordinarily more focused on productive behavior than job or task performance, including in-role and extra-role performance. In-role performance tells managers how well an employee performs the required aspects of the job; extra-role performance includes behaviors that are not necessarily required by the job but that nonetheless contribute to organizational effectiveness. By taking both in-role and extra-role performance into account, an I-O psychologist is able to assess employees' effectiveness (how well they do what they were hired to do), efficiency (outputs relative to inputs), and productivity (how much they help the organization reach its goals). Three forms of productive behavior that IO psychologists often evaluate include job performance, organizational citizenship behavior (see below), and innovation. 
Job performance represents behaviors employees engage in while at work that contribute to organizational goals. These behaviors are formally evaluated by an organization as part of an employee's responsibilities. In order to understand and ultimately predict job performance, it is important to be precise when defining the term. Job performance is about behaviors that are within the control of the employee and not about results (effectiveness), the costs involved in achieving results (productivity), the results that can be achieved in a period of time (efficiency), or the value an organization places on a given level of performance, effectiveness, productivity or efficiency (utility). To model job performance, researchers have attempted to define a set of dimensions that are common to all jobs. Using a common set of dimensions provides a consistent basis for assessing performance and enables the comparison of performance across jobs. Performance is commonly broken into two major categories: in-role (technical aspects of a job) and extra-role (non-technical abilities such as communication skills and being a good team member). While this distinction in behavior has been challenged, it is commonly made by both employees and management. A model of performance by Campbell breaks performance into in-role and extra-role categories. Campbell labeled job-specific task proficiency and non-job-specific task proficiency as in-role dimensions, while written and oral communication, demonstrating effort, maintaining personal discipline, facilitating peer and team performance, supervision and leadership, and management and administration were labeled as extra-role dimensions. Murphy's model of job performance also broke job performance into in-role and extra-role categories. However, task-oriented behaviors composed the in-role category, and the extra-role category included interpersonally-oriented behaviors, down-time behaviors, and destructive and hazardous behaviors. 
The measurement of job performance is usually done through pencil/paper tests, job skills tests, on-site hands-on tests, off-site hands-on tests, high-fidelity simulations, symbolic simulations, task ratings, and global ratings. These various tools are often used to evaluate performance on specific tasks and overall job performance. Van Dyne and LePine developed a measurement model in which overall job performance was evaluated using Campbell's in-role and extra-role categories. Here, in-role performance was reflected through how well "employees met their performance expectations and performed well at the tasks that made up the employees' job." The extra-role category comprised dimensions such as how well the employee assists others with their work for the benefit of the group, whether the employee voices new ideas for projects or changes to procedure, and whether the employee attends functions that help the group. To assess job performance, reliable and valid measures must be established. While there are many sources of error with performance ratings, error can be reduced through rater training and through the use of behaviorally-anchored rating scales. Such scales can be used to clearly define the behaviors that constitute poor, average, and superior performance. Additional factors that complicate the measurement of job performance include the instability of job performance over time due to forces such as changing performance criteria, the structure of the job itself, and the restriction of variation in individual performance by organizational forces. Other complicating factors include errors in job measurement techniques, acceptance and the justification of poor performance, and the lack of importance of individual performance. The determinants of job performance consist of factors having to do with the individual worker as well as environmental factors in the workplace. 
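The individual-level determinants discussed next are often summarized, following Campbell, as an interaction of knowledge and motivation. As an illustrative sketch (the multiplicative form here is a common informal reading, not Campbell's exact specification):

    Job performance = f(declarative knowledge × procedural knowledge × motivation)

On this reading, a near-zero value of any one component sharply limits performance, no matter how high the other two components are.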
According to Campbell's model of the determinants of job performance, job performance is a result of the interaction between declarative knowledge (knowledge of facts or things), procedural knowledge (knowledge of what needs to be done and how to do it), and motivation (reflective of an employee's choices regarding whether to expend effort, the level of effort to expend, and whether to persist with the level of effort chosen). The interplay between these factors shows that an employee may, for example, have a low level of declarative knowledge, but may still have a high level of performance if the employee has high levels of procedural knowledge and motivation. Regardless of the job, three determinants stand out as predictors of performance: (1) general mental ability (especially for jobs higher in complexity); (2) job experience (although there is a law of diminishing returns); and (3) the personality trait of conscientiousness (people who are dependable and achievement-oriented, who plan well). These determinants appear to influence performance largely through the acquisition and usage of job knowledge and the motivation to do well. Further, an expanding area of research in job performance determinants includes emotional intelligence. Organizational citizenship behaviors (OCBs) are another form of workplace behavior that IO psychologists are involved with. OCBs tend to be beneficial to both the organization and other workers. Dennis Organ (1988) defines OCBs as "individual behavior that is discretionary, not directly or explicitly recognized by the formal reward system, and that in the aggregate promotes the effective functioning of the organization." Behaviors that qualify as OCBs can fall into one of the following five categories: altruism, courtesy, sportsmanship, conscientiousness, and civic virtue. OCBs have also been categorized in other ways, for example, by their intended targets: individuals, supervisors, and the organization as a whole. 
Alternative ways of categorizing OCBs include "compulsory OCBs", which are engaged in owing to coercive persuasion or peer pressure rather than out of good will. The extent to which OCBs are voluntary has been the subject of some debate. Other research suggests that some employees perform OCBs to influence how they are viewed within the organization. While these behaviors are not formally part of the job description, performing them can influence performance appraisals. Researchers have advanced the view that employees engage in OCBs as a form of "impression management," a term coined by Erving Goffman. Goffman defined impression management as "the way in which the individual ... presents himself and his activity to others, the ways in which he guides and controls the impression they form of him, and the kinds of things he may and may not do while sustaining his performance before them." Some researchers have hypothesized that OCBs are not performed out of good will, positive affect, etc., but instead as a way of being noticed by others, including supervisors. Four qualities are generally linked to creative and innovative behavior by individuals. At the organizational level, a study by Damanpour identified four specific characteristics that may predict innovation. Counterproductive work behavior (CWB) can be defined as employee behavior that goes against the goals of an organization. These behaviors can be intentional or unintentional and result from a wide range of underlying causes and motivations. Some CWBs have instrumental motivations (e.g., theft). It has been proposed that a person-by-environment interaction can be utilized to explain a variety of counterproductive behaviors. For instance, an employee who sabotages another employee's work may do so because of lax supervision (environment) and underlying psychopathology (person) that work in concert to result in the counterproductive behavior. 
There is evidence that an emotional response (e.g., anger) to job stress (e.g., unfair treatment) can motivate CWBs. The forms of counterproductive behavior with the most empirical examination are ineffective job performance, absenteeism, job turnover, and accidents. Less common but potentially more detrimental forms of counterproductive behavior have also been investigated, including violence and sexual harassment. Leadership can be defined as a process of influencing others to agree on a shared purpose, and to work towards shared objectives. A distinction should be made between leadership and management. Managers handle administrative tasks and organize work environments. Although leaders may be required to undertake managerial duties as well, leaders typically focus on inspiring followers and creating a shared organizational culture and values. Managers deal with complexity, while leaders deal with initiating and adapting to change. Managers undertake the tasks of planning, budgeting, organizing, staffing, controlling, and problem solving. In contrast, leaders undertake the tasks of setting a direction or vision, aligning people to shared goals, communicating, and motivating. Approaches to studying leadership can be broadly classified into three categories: leader-focused approaches, contingency-focused approaches, and follower-focused approaches. Leader-focused approaches look to organizational leaders to determine the characteristics of effective leadership. According to the trait approach, more effective leaders possess certain traits that less effective leaders lack. More recently, this approach is being used to predict leader emergence. The following traits have been identified as those that predict leader emergence when there is no formal leader: high intelligence, a high need for dominance, high self-motivation, and social perceptiveness. 
Another leader-focused approach is the behavioral approach, which focuses on the behaviors that distinguish effective from ineffective leaders. There are two categories of leadership behaviors: consideration and initiating structure. Behaviors associated with the category of consideration include showing subordinates they are valued and that the leader cares about them. An example of a consideration behavior is showing compassion when problems arise in or out of the office. Behaviors associated with the category of initiating structure include facilitating the task performance of groups. One example of an initiating structure behavior is meeting one-on-one with subordinates to explain expectations and goals. The final leader-focused approach is power and influence. To be most effective, a leader should be able to influence others to behave in ways that are in line with the organization's mission and goals. How influential a leader can be depends on their social power – their potential to influence their subordinates. There are six bases of power: French and Raven's classic five bases of coercive, reward, legitimate, expert, and referent power, plus informational power. A leader can use several different tactics to influence others within an organization. These include: rational persuasion, inspirational appeal, consultation, ingratiation, exchange, personal appeal, coalition, legitimating, and pressure. Of the three approaches to leadership, contingency-focused approaches have been the most prevalent over the past 30 years. Contingency-focused theories base a leader's effectiveness on their ability to assess a situation and adapt their behavior accordingly. These theories assume that an effective leader can accurately "read" a situation and skillfully employ a leadership style that meets the needs of the individuals involved and the task at hand. A brief introduction to the most prominent contingency-focused theories will follow. 
The Fiedler contingency model holds that a leader's effectiveness depends on the interaction between their characteristics and the characteristics of the situation. Path–goal theory asserts that the role of the leader is to help his or her subordinates achieve their goals. To effectively do this, leaders must skillfully select from four different leadership styles to meet the situational factors. The situational factors are a product of the characteristics of subordinates and the characteristics of the environment. The leader–member exchange theory (LMX) focuses on how leader–subordinate relationships develop. Generally speaking, when a subordinate performs well or when there are positive exchanges between a leader and a subordinate, their relationship is strengthened, performance and job satisfaction are enhanced, and the subordinate will feel more commitment to the leader and the organization as a whole. The Vroom-Yetton-Jago model focuses on decision-making with respect to a feasibility set. I-O psychologists may also become involved with organizational change, a process which some call organizational development (OD). Tools used to advance organization development include the survey-feedback technique. The technique involves the periodic assessment (via surveys) of employee attitudes and feelings. The results are conveyed to organizational stakeholders, who may want to take the organization in a particular direction. Another tool is the team-building technique. Because many if not most tasks within an organization are completed by small groups and/or teams, team building can become important for organizational success. In order to enhance a team's morale and problem-solving skills, I-O psychologists help the groups to improve their self-confidence, group cohesiveness, and working effectiveness. An important topic in I-O is the connection between people's working and nonworking lives. Two concepts are particularly relevant. 
Work-family conflict is the incompatibility between the job and family life. Conflict can occur when stressful experiences in one domain spill over into the other, such as someone coming home in a bad mood after having a difficult day at work. It can also occur when there are time conflicts, such as having a work meeting at the same time as a child’s doctor’s appointment. Work-family enrichment (also called work-family facilitation) occurs when one domain provides benefits to the other. For example, a spouse might assist with a work task or a supervisor might offer assistance with a family problem. I-O psychology and organizational behavior researchers have sometimes investigated similar topics. The overlap has led to some confusion regarding how the two disciplines differ. Sometimes there has been confusion within organizations regarding the practical duties of I-O psychologists and human resource management specialists. The minimum requirement for working as an IO psychologist is a master's degree. Normally, this degree requires about two to three years of postgraduate work to complete. Of all the degrees granted in IO psychology each year, approximately two-thirds are at the master's level. A comprehensive list of US and Canadian master's and doctoral programs can be found at the web site of the Society for Industrial and Organizational Psychology (SIOP). Admission into IO psychology PhD programs is highly competitive; many programs accept only a small number of applicants each year. There are graduate degree programs in IO psychology outside of the US and Canada. The SIOP web site lists some of them. In Australia, organisational psychologists must be accredited by the Australian Psychological Society (APS). 
To become an organisational psychologist, one must meet the criteria for a general psychologist's licence: a three-year bachelor's degree in psychology, a fourth-year honours degree or postgraduate diploma in psychology, and two years of full-time supervised practice plus 80 hours of professional development. There are other avenues available, such as a two-year supervised training program after honours (i.e. 4+2 pathway), or one year of postgraduate coursework and practical placements followed by a one-year supervised training program (i.e. 5+1 pathway). After this, psychologists can elect to specialise as organisational psychologists in Australia. There are many different sets of competencies for different specializations within IO psychology and IO psychologists are versatile behavioral scientists. For example, an IO psychologist specializing in selection and recruiting should have expertise in finding the best talent for the organization and getting everyone on board, but might not need to know much about executive coaching. Some IO psychologists specialize in specific areas of consulting whereas others tend to generalize their areas of expertise. There are basic skills and knowledge an individual needs in order to be an effective IO psychologist, which include being an independent learner, interpersonal skills (e.g., listening skills), and general consultation skills (e.g., skills and knowledge in the problem area). U.S. News & World Report lists I-O Psychology as the third best science job, with a strong job market in the U.S. In the 2020 SIOP salary survey, the median annual salary for a PhD in IO psychology was $125,000; for a master's-level IO psychologist it was $88,900. The highest paid PhD IO psychologists were self-employed consultants, who had a median annual income of $167,000. The highest paid in private industry worked in IT ($153,000), retail ($151,000) and healthcare ($147,000). 
The lowest earners were found in state and local government positions, averaging approximately $100,000, and in academic positions in colleges and universities that do not award doctoral degrees, with median salaries between $80,000 and $94,000. An IO psychologist, whether an academic, a consultant, or an employee of an organization, is expected to maintain high ethical standards. SIOP encourages its members to follow the APA Ethics Code. With more organizations becoming global, it is important that when an IO psychologist works outside her or his home country, the psychologist be aware of the rules, regulations, and cultures of the organizations and countries in which the psychologist works, while also adhering to the ethical standards set at home.
[ { "paragraph_id": 0, "text": "Industrial and organizational psychology (I-O psychology) \"focuses the lens of psychological science on a key aspect of human life, namely, their work lives. In general, the goals of I-O psychology are to better understand and optimize the effectiveness, health, and well-being of both individuals and organizations.\" It is an applied discipline within psychology and is an international profession. I-O psychology is also known as occupational psychology in the United Kingdom, organisational psychology in Australia and New Zealand, and work and organizational (WO) psychology throughout Europe and Brazil. Industrial, work, and organizational (IWO) psychology is the broader, more global term for the science and profession.", "title": "" }, { "paragraph_id": 1, "text": "I-O psychologists are trained in the scientist–practitioner model. As an applied field, the discipline involves both research and practice and I-O psychologists apply psychological theories and principles to organizations and the individuals within them. They contribute to an organization's success by improving the job performance, wellbeing, motivation, job satisfaction and the health and safety of employees.", "title": "" }, { "paragraph_id": 2, "text": "An I-O psychologist conducts research on employee attitudes, behaviors, emotions, motivation, and stress. The field is concerned with how these things can be improved through recruitment processes, training programs, feedback, management systems and other interventions. I-O psychology research and practice also includes the work–nonwork interface such as selecting and transitioning into a new career, occupational burnout, unemployment, retirement, and work-family conflict and balance.", "title": "" }, { "paragraph_id": 3, "text": "I-O psychology is one of the 17 professional specialties recognized by the American Psychological Association (APA). 
In the United States the profession is represented by Division 14 of the APA and is formally known as the Society for Industrial and Organizational Psychology (SIOP). Similar I-O psychology societies can be found in many countries. In 2009 the Alliance for Organizational Psychology was formed and is a federation of Work, Industrial, & Organizational Psychology societies and \"network partners\" from around the world.", "title": "" }, { "paragraph_id": 4, "text": "I-O psychology is an international science and profession and depending on the region of the world, it is referred to by different names. In North America, Canada and South Africa the title \"I-O\" psychology is used; in the United Kingdom, the field is known as occupational psychology. Occupational psychology in the UK is one of nine \"protected titles\" within the \"practitioner psychologist\" professions. The profession is regulated by the Health and Care Professions Council. In the UK, graduate programs in psychology, including occupational psychology, are accredited by the British Psychological Society.", "title": "International" }, { "paragraph_id": 5, "text": "In Europe, someone with a specialist EuroPsy Certificate in Work and Organisational Psychology is a fully qualified psychologist and a specialist in the work psychology field. Industrial and organizational psychologists reaching the EuroPsy standard are recorded in the Register of European Psychologists. I-O psychology is one of the three main psychology specializations in Europe.", "title": "International" }, { "paragraph_id": 6, "text": "In Australia, the title \"organisational psychologist\" is protected by law and regulated by the Australian Health Practitioner Regulation Agency (AHPRA). 
Organizational psychology is one of nine areas of specialist endorsement for psychology practice in Australia.", "title": "International" }, { "paragraph_id": 7, "text": "In South Africa, industrial psychology is a registration category for the profession of psychologist as regulated by the Health Professions Council of South Africa (HPCSA).", "title": "International" }, { "paragraph_id": 8, "text": "In 2009 the Alliance for Organizational Psychology was formed and is a federation of Work, Industrial, & Organizational Psychology societies and \"network partners\" from around the world. In 2021 the British Psychological Society (BPS) Division of Occupational Psychology (DOP) and the Australian Psychological Society's (APS) College of Organizational Psychology joined the Alliance. The Alliance currently has member organizations representing Industrial, Work and Organisational psychology and IWO psychologists from Australia, Britain, Brazil, Canada, Chile, Europe, Germany, Hong Kong, Japan, Netherlands, New Zealand, Singapore, South Africa and the United States.", "title": "International" }, { "paragraph_id": 9, "text": "The historical development of I-O psychology was paralleled in the US, the UK, Australia, Germany, the Netherlands, and Eastern European countries such as Romania. The roots of I-O psychology trace back to almost the beginning of psychology as a science, when Wilhelm Wundt founded one of the first psychological laboratories in 1879 in Leipzig, Germany. In the mid–1880s, Wundt trained two psychologists, Hugo Münsterberg and James McKeen Cattell, who went on to have a major influence on the emergence of I-O psychology. World War I was an impetus for the development of the field simultaneously in the UK and US. 
Münsterberg, one of the founders of I-O psychology, wrote, \"Our aim is to sketch the outlines of a new science which is intermediate between the modern laboratory psychology and the problems of economics: the psychological experiment is systematically to be placed at the service of commerce and industry\" (p. 3).", "title": "Historical overview" }, { "paragraph_id": 10, "text": "Instead of viewing performance differences as human \"errors,\" Cattell was one of the first to recognize the importance of differences among individuals as a way of better understanding work behavior. Walter Dill Scott, who was a contemporary of Cattell and was elected President of the American Psychological Association (APA) in 1919, was arguably the most prominent I-O psychologist of his time. Scott, along with Walter Van Dyke Bingham, worked at what was then Carnegie Institute of Technology, developing methods for selecting and training sales personnel.", "title": "Historical overview" }, { "paragraph_id": 11, "text": "The \"industrial\" side of I-O psychology originated in research on individual differences, assessment, and the prediction of work performance. Industrial psychology crystallized during World War I, in response to the need to rapidly assign new troops to duty. Scott and Bingham volunteered to help with the testing and placement of more than a million U.S. Army recruits. In 1917, together with other prominent psychologists, they adapted a well-known intelligence test, the Stanford–Binet, which was designed for testing one individual at a time, to make it suitable for group testing. The new test was called the Army Alpha. After the War, increasing employment in the U.S. created opportunities for I-O psychology practitioners who called themselves \"industrial psychologists.\"", "title": "Historical overview" }, { "paragraph_id": 12, "text": "The \"organizational\" side of the field was focused on employee behavior, feelings, and well-being. During World War I, with the U.K. 
government's interest in worker productivity in munitions factories, Charles Myers studied worker fatigue and well-being. Following the war, Elton Mayo found that rest periods improved morale and reduced turnover in a Philadelphia textile factory. He later joined the ongoing Hawthorne studies, where he became interested in how workers' emotions and informal relationships affected productivity. The results of these studies ushered in the human relations movement.", "title": "Historical overview" }, { "paragraph_id": 13, "text": "World War II brought renewed interest in ability testing. The U.S. military needed to accurately place recruits in new technologically advanced jobs. There was also concern with morale and fatigue in war-industry workers. In the 1960s Arthur Kornhauser examined the impact on productivity of hiring mentally unstable workers. Kornhauser also examined the link between industrial working conditions and worker mental health as well as the spillover into a worker's personal life of having an unsatisfying job. Zickar noted that most of Kornhauser's I-O contemporaries favored management and Kornhauser was largely alone in his interest in protecting workers. Vinchur and Koppes (2010) observed that I-O psychologists' interest in job stress is a relatively recent development (p. 22).", "title": "Historical overview" }, { "paragraph_id": 14, "text": "The industrial psychology division of the former American Association of Applied Psychology became a division within APA, becoming Division 14 of APA. It was initially called the Industrial and Business Psychology Division. In 1962, the name was changed to the Industrial Psychology Division. In 1973, it was renamed again, this time to the Division of Industrial and Organizational Psychology. 
In 1982, the unit became more independent of APA, and its name was changed again, this time to the Society for Industrial and Organizational Psychology.", "title": "Historical overview" }, { "paragraph_id": 15, "text": "The name change of the division from \"industrial psychology\" to \"industrial and organizational psychology\" reflected the shift in the work of industrial psychologists who had originally addressed work behavior from the individual perspective, examining performance and attitudes of individual workers. Their work became broader. Group behavior in the workplace became a worthy subject of study. The emphasis on the \"organizational\" underlined the fact that when an individual joins an organization (e.g., the organization that hired him or her), he or she will be exposed to a common goal and a common set of operating procedures. In the 1970s in the UK, references to occupational psychology became more common than references to I-O psychology.", "title": "Historical overview" }, { "paragraph_id": 16, "text": "According to Bryan and Vinchur, \"while organizational psychology increased in popularity through [the 1960s and 1970s], research and practice in the traditional areas of industrial psychology continued, primarily driven by employment legislation and case law\". There was a focus on fairness and validity in selection efforts as well as in the job analyses that undergirded selection instruments. For example, I-O psychology showed increased interest in behaviorally anchored rating scales. What critics there were of I-O psychology accused the discipline of being responsive only to the concerns of management.", "title": "Historical overview" }, { "paragraph_id": 17, "text": "From the 1980s to 2010s, other changes in I-O psychology took place. Researchers increasingly adopted a multi-level approach, attempting to understand behavioral phenomena from both the level of the organization and the level of the individual worker. 
There was also an increased interest in the needs and expectations of employees as individuals. For example, an emphasis on organizational justice and the psychological contract took root, as well as the more traditional concerns of selection and training. Methodological innovations (e.g., meta-analyses, structural equation modeling) were adopted. With the passage of the Americans with Disabilities Act in 1990 and parallel legislation elsewhere in the world, I-O psychology saw an increased emphasis on \"fairness in personnel decisions.\" Training research relied increasingly on advances in educational psychology and cognitive science.", "title": "Historical overview" }, { "paragraph_id": 18, "text": "I-O researchers employ both qualitative and quantitative methods, although quantitative methods are far more common. Basic quantitative methods used in I-O psychology include correlation, multiple regression, and analysis of variance. More advanced statistical methods include logistic regression, structural equation modeling, and hierarchical linear modeling (HLM; also known as multilevel modeling). I-O researchers have also employed meta-analysis. I-O psychologists also employ psychometric methods including methods associated with classical test theory, generalizability theory, and item response theory (IRT).", "title": "Research methods" }, { "paragraph_id": 19, "text": "I-O psychologists have also employed qualitative methods, which largely involve focus groups, interviews, and case studies. I-O psychologists conducting research on organizational culture have employed ethnographic techniques and participant observation. A qualitative technique associated with I-O psychology is Flanagan's critical incident technique. 
I-O psychologists have also coordinated the use of quantitative and qualitative methods in the same study.", "title": "Research methods" }, { "paragraph_id": 20, "text": "I-O psychologists deal with a wide range of topics concerning people in the workplace.", "title": "Topics" }, { "paragraph_id": 21, "text": "Job analysis encompasses a number of different methods including, but not limited to, interviews, questionnaires, task analysis, and observation. A job analysis primarily involves the systematic collection of information about a job. A task-oriented job analysis involves an assessment of the duties, tasks, and/or competencies a job requires. By contrast, a worker-oriented job analysis involves an examination of the knowledge, skills, abilities, and other characteristics (KSAOs) required to successfully perform the work. Information obtained from job analyses is used for many purposes, including the creation of job-relevant selection procedures, the development of criteria for performance appraisals, the conducting of performance appraisals, and the development and implementation of training programs.", "title": "Topics" }, { "paragraph_id": 22, "text": "I-O psychologists design (a) recruitment processes and (b) personnel selection systems. Personnel recruitment is the process of identifying qualified candidates in the workforce and getting them to apply for jobs within an organization. Personnel recruitment processes include developing job announcements, placing ads, defining key qualifications for applicants, and screening out unqualified applicants.", "title": "Topics" }, { "paragraph_id": 23, "text": "Personnel selection is the systematic process of hiring and promoting personnel. Personnel selection systems employing I-O methods use quantitative data to determine the most qualified candidates. 
This can involve the use of psychological tests, biographical information blanks, interviews, work samples, and assessment centers.", "title": "Topics" }, { "paragraph_id": 24, "text": "Personnel selection procedures are usually validated, i.e., shown to be job relevant, using one or more of the following types of validity: content validity, construct validity, and/or criterion-related validity. I-O psychologists must adhere to professional standards in personnel selection efforts. SIOP (e.g., Principles for validation and use of personnel selection procedures) and APA together with the National Council on Measurement in Education (e.g., Standards for educational and psychological testing) are sources of those standards. The Equal Employment Opportunity Commission's Uniform guidelines are also influential in guiding personnel selection decisions.", "title": "Topics" }, { "paragraph_id": 25, "text": "A meta-analysis of selection methods found that general mental ability (g factor) was the best overall predictor of job performance and attainment in training.", "title": "Topics" }, { "paragraph_id": 26, "text": "Performance appraisal or performance evaluation is the process in which an individual's or a group's work behaviors and outcomes are assessed against managers' and others' expectations for the job. Performance appraisal is used for a variety of purposes including the basis for employment decisions (promotion, raises and termination), feedback to employees, and training needs assessment. Performance management is the process of providing performance feedback relative to expectations and information relevant to helping a worker improve his or her performance (e.g., coaching, mentoring). Performance management may also include documenting and tracking performance information for organizational evaluation purposes.", "title": "Topics" }, { "paragraph_id": 27, "text": "Individual assessment involves the measurement of individual differences. 
I-O psychologists perform individual assessments in order to evaluate differences among candidates for employment as well as differences among employees. The constructs measured pertain to job performance. With candidates for employment, individual assessment is often part of the personnel selection process. These assessments can include written tests, aptitude tests, physical tests, psycho-motor tests, personality tests, integrity and reliability tests, work samples, simulations, and assessment centers.", "title": "Topics" }, { "paragraph_id": 28, "text": "A more recent focus of the I-O field is the health, safety, and well-being of employees. Topics include occupational safety, occupational stress, and workplace bullying, aggression and violence.", "title": "Topics" }, { "paragraph_id": 29, "text": "There are many features of work that can be stressful to employees. Research has identified a number of job stressors (environmental conditions at work) that contribute to strains (adverse behavioral, emotional, physical, and psychological reactions). Occupational stress can have implications for organizational performance because of the emotions job stress evokes. For example, a job stressor such as conflict with a supervisor can precipitate anger that in turn motivates counterproductive workplace behaviors. A number of prominent models of job stress have been developed to explain the job stress process, including the person-environment (P-E) fit model, which was developed by University of Michigan social psychologists, and the demand-control(-support) and effort-reward imbalance models, which were developed by sociologists.", "title": "Topics" }, { "paragraph_id": 30, "text": "Research has also examined occupational stress in specific occupations, including police, general practitioners, and dentists. Another concern has been the relation of occupational stress to family life. 
Other I-O researchers have examined gender differences in leadership style and job stress and strain in the context of male- and female-dominated industries, and unemployment-related distress. Occupational stress has also been linked to lack of fit between people and their jobs.", "title": "Topics" }, { "paragraph_id": 31, "text": "Accidents and safety in the workplace are important because of the serious injuries and fatalities that are all too common. Research has linked accidents to psychosocial factors in the workplace including overwork that leads to fatigue, workplace violence, and working night shifts. \"Stress audits\" can help organizations remain compliant with various occupational safety regulations. Psychosocial hazards can contribute to musculoskeletal disorders. A psychosocial factor related to accident risk is safety climate, which refers to employees' perceptions of the extent to which their work organization prioritizes safety. By contrast, psychosocial safety climate refers to management's \"policies, practices, and procedures\" aimed at protecting workers' psychological health. Research on safety leadership is also relevant to understanding employee safety performance. Research suggests that safety-oriented transformational leadership is associated with a positive safety climate and safe worker practices.", "title": "Topics" }, { "paragraph_id": 32, "text": "I-O psychologists are concerned with the related topics of workplace bullying, aggression, and violence. For example, I-O research found that exposure to workplace violence elicited ruminative thinking. Ruminative thinking is associated with poor well-being. Research has found that interpersonal aggressive behaviour is associated with worse team performance.", "title": "Topics" }, { "paragraph_id": 33, "text": "A new discipline, occupational health psychology (OHP), emerged from both health psychology and I-O psychology as well as occupational medicine. 
OHP concerns itself with such topic areas as the impact of occupational stressors on mental and physical health, the health impact of involuntary unemployment, violence and bullying in the workplace, psychosocial factors that influence accident risk and safety, work-family balance, and interventions designed to improve/protect worker health. Spector observed that one of the problems facing I-O psychologists in the late 20th century who were interested in the health of working people was resistance within the field to publishing papers on worker health. In the 21st century, OHP topics have become popular at the Society for Industrial and Organizational Psychology conference.", "title": "Topics" }, { "paragraph_id": 34, "text": "Work design concerns the \"content and organisation of one's work tasks, activities, relationships, and responsibilities.\" Research has demonstrated that work design has important implications for individual employees (e.g., level of engagement, job strain, chance of injury), teams (e.g., how effectively teams co-ordinate their activities), organisations (e.g., productivity, safety, efficiency targets), and society (e.g., whether a nation utilises the skills of its population or promotes effective aging).", "title": "Topics" }, { "paragraph_id": 35, "text": "I-O psychologists review job tasks, relationships, and an individual's way of thinking about their work to ensure that their roles are meaningful and motivating, thus creating greater productivity and job satisfaction. Deliberate interventions aimed at altering work design are sometimes referred to as work redesign. 
Such interventions can be initiated by the management of an organization (e.g., job rotation, job enlargement, job enrichment) or by individual workers (e.g., job crafting, role innovation, idiosyncratic deals).", "title": "Topics" }, { "paragraph_id": 36, "text": "Training involves the systematic teaching of skills, concepts, or attitudes that results in improved performance in another environment. Because many people hired for a job are not already versed in all the tasks the job requires, training may be needed to help the individual perform the job effectively. Evidence indicates that training is often effective, and that it succeeds in terms of higher net sales and gross profitability per employee.", "title": "Topics" }, { "paragraph_id": 37, "text": "Similar to performance management (see above), an I-O psychologist would employ a job analysis in concert with the application of the principles of instructional design to create an effective training program. A training program is likely to include a summative evaluation at its conclusion in order to ensure that trainees have met the training objectives and can perform the target work tasks at an acceptable level. Kirkpatrick describes four levels of criteria by which to evaluate training:", "title": "Topics" }, { "paragraph_id": 38, "text": "Training programs often include formative evaluations to assess the effect of the training as the training proceeds. Formative evaluations can be used to locate problems in training procedures and help I-O psychologists make corrective adjustments while training is ongoing.", "title": "Topics" }, { "paragraph_id": 39, "text": "The foundation for training programs is learning. Learning outcomes can be organized into three broad categories: cognitive, skill-based, and affective outcomes. Cognitive training is aimed at instilling declarative knowledge or the knowledge of rules, facts, and principles (e.g., police officer training covers laws and court procedures). 
Skill-based training aims to impart procedural knowledge (e.g., skills needed to use a special tool) or technical skills (e.g., understanding the workings of a software program). Affective training concerns teaching individuals to develop specific attitudes or beliefs that predispose trainees to behave a certain way (e.g., show commitment to the organization, appreciate diversity).", "title": "Topics" }, { "paragraph_id": 40, "text": "A needs assessment, an analysis of corporate and individual goals, is often undertaken prior to the development of a training program. In addition, a careful training needs analysis is required in order to develop a systematic understanding of where training is needed, what should be taught, and who will be trained. A training needs analysis typically involves a three-step process that includes organizational analysis, task analysis, and person analysis.", "title": "Topics" }, { "paragraph_id": 41, "text": "An organizational analysis is an examination of organizational goals and resources as well as the organizational environment. The results of an organizational analysis help to determine where training should be directed. The analysis identifies the training needs of different departments or subunits. It systematically assesses manager, peer, and technological support for transfer of training. An organizational analysis also takes into account the climate of the organization and its subunits. For example, if a climate for safety is emphasized throughout the organization or in subunits of the organization (e.g., production), then training needs will likely reflect an emphasis on safety. A task analysis uses the results of a job analysis to determine what is needed for successful job performance, contributing to training content. With organizations increasingly trying to identify \"core competencies\" that are required for all jobs, task analysis can also include an assessment of competencies. 
A person analysis identifies which individuals within an organization should receive training and what kind of instruction they need. Employee needs can be assessed using a variety of methods that identify weaknesses that training can address.", "title": "Topics" }, { "paragraph_id": 42, "text": "Work motivation reflects the energy an individual applies \"to initiate work-related behavior, and to determine its form, direction, intensity, and duration.\" Understanding what motivates an organization's employees is central to I-O psychology. Motivation is generally thought of as a theoretical construct that fuels behavior. An incentive is an anticipated reward that is thought to incline a person to behave a certain way. Motivation varies among individuals. To study its influence on behavior, motivation must be examined together with ability and environmental influences. Because of motivation's role in influencing workplace behavior and performance, many organizations structure the work environment to encourage productive behaviors and discourage unproductive behaviors.", "title": "Topics" }, { "paragraph_id": 43, "text": "Motivation involves three psychological processes: arousal, direction, and intensity. Arousal is what initiates action. It is often fueled by a person's need or desire for something that is missing from his or her life, either totally or partially. Direction refers to the path employees take in accomplishing the goals they set for themselves. Intensity is the amount of energy employees put into goal-directed work performance. The level of intensity often reflects the importance and difficulty of the goal. These psychological processes involve four factors. First, motivation serves to direct attention, focusing on particular issues, people, tasks, etc. Second, it serves to stimulate effort. Third, motivation influences persistence. 
Finally, motivation influences the choice and application of task-related strategies.", "title": "Topics" }, { "paragraph_id": 44, "text": "Organizational climate is the perceptions of employees about what is important in an organization, that is, what behaviors are encouraged versus discouraged. It can be assessed in individual employees (climate perceptions) or averaged across groups of employees within a department or organization (organizational climate). Climates are usually focused on specific employee outcomes, or what is called “climate for something”. There are more than a dozen types of climates that have been assessed and studied. Some of the more popular include:", "title": "Topics" }, { "paragraph_id": 45, "text": "Climate concerns organizational policies and practices that encourage or discourage specific behaviors by employees. Shared perceptions of what the organization emphasizes (organizational climate) are part of organizational culture, but culture concerns far more than shared perceptions, as discussed in the next section.", "title": "Topics" }, { "paragraph_id": 46, "text": "While there is no universal definition for organizational culture, a collective understanding shares the following assumptions:", "title": "Topics" }, { "paragraph_id": 47, "text": "... that they are related to history and tradition, have some depth, are difficult to grasp and account for, and must be interpreted; that they are collective and shared by members of groups and primarily ideational in character, having to do with values, understandings, beliefs, knowledge, and other intangibles; and that they are holistic and subjective rather than strictly rational and analytical.", "title": "Topics" }, { "paragraph_id": 48, "text": "Organizational culture has been shown to affect important organizational outcomes such as performance, attraction, recruitment, retention, employee satisfaction, and employee well-being.
There are three levels of organizational culture: artifacts, shared values, and basic beliefs and assumptions. Artifacts comprise the physical components of the organization that relay cultural meaning. Shared values are individuals' preferences regarding certain aspects of the organization's culture (e.g., loyalty, customer service). Basic beliefs and assumptions include individuals' impressions about the trustworthiness and supportiveness of an organization, and are often deeply ingrained within the organization's culture.", "title": "Topics" }, { "paragraph_id": 49, "text": "In addition to an overall culture, organizations also have subcultures. Subcultures can be departmental (e.g. different work units) or defined by geographical distinction. While there is no single \"type\" of organizational culture, some researchers have developed models to describe different organizational cultures.", "title": "Topics" }, { "paragraph_id": 50, "text": "Group behavior involves the interactions among individuals in a collective. Most I-O group research is about teams, that is, groups in which people work together to achieve the same task goals. The individuals' opinions, attitudes, and adaptations affect group behavior, with group behavior in turn affecting those opinions, etc. The interactions are thought to fulfill some need satisfaction in an individual who is part of the collective.", "title": "Topics" }, { "paragraph_id": 51, "text": "Organizations often organize teams because teams can accomplish a much greater amount of work in a short period of time than an individual can accomplish. I-O research has examined the harm workplace aggression does to team performance.", "title": "Topics" }, { "paragraph_id": 52, "text": "Team composition, or the configuration of team member knowledge, skills, abilities, and other characteristics, fundamentally influences teamwork.
Team composition can be considered in the selection and management of teams to increase the likelihood of team success. To achieve high-quality results, teams built with members having higher skill levels are more likely to be effective than teams built around members having lesser skills; teams that include members with a diversity of skills are also likely to show improved team performance. Team members should also be compatible in terms of personality traits, values, and work styles. There is substantial evidence that personality traits and values can shape the nature of teamwork, and influence team performance.", "title": "Topics" }, { "paragraph_id": 53, "text": "A fundamental question in team task design is whether or not a task is even appropriate for a team. Those tasks that require predominantly independent work are best left to individuals, and team tasks should include those tasks that consist primarily of interdependent work. When a given task is appropriate for a team, task design can play a key role in team effectiveness.", "title": "Topics" }, { "paragraph_id": 54, "text": "Job characteristic theory identifies core job dimensions that affect motivation, satisfaction, performance, etc. These dimensions include skill variety, task identity, task significance, autonomy and feedback. The dimensions map well to the team environment. Individual contributors who perform team tasks that are challenging, interesting, and engaging are more likely to be motivated to exert greater effort and perform better than team members who are working on tasks that lack those characteristics.", "title": "Topics" }, { "paragraph_id": 55, "text": "Organizational support systems affect team effectiveness and provide resources for teams operating in the multi-team environment. During the chartering of new teams, organizational enabling resources are first identified. Examples of enabling resources include facilities, equipment, information, training, and leadership.
Team-specific resources (e.g., budgetary resources, human resources) are typically made available. Team-specific human resources represent the individual contributors who are selected to be team members. Intra-team processes (e.g., task design, task assignment) involve these team-specific resources. Teams also function in dynamic multi-team environments. Teams often must respond to shifting organizational contingencies.", "title": "Topics" }, { "paragraph_id": 56, "text": "Organizational reward systems strengthen and reinforce individual team member efforts that contribute towards reaching team goals. In other words, rewards that are given to individual team members should be contingent upon the performance of the entire team.", "title": "Topics" }, { "paragraph_id": 57, "text": "Several design elements are needed to enable organizational reward systems to operate successfully. First, for a collective assessment to be appropriate for individual team members, the group's tasks must be highly interdependent. If this is not the case, individual assessment is more appropriate than team assessment. Second, individual-level reward systems and team-level reward systems must be compatible. For example, it would be unfair to reward the entire team for a job well done if only one team member did most of the work. That team member would most likely view teams and teamwork negatively, and would not want to work on a team in the future. Third, an organizational culture must be created such that it supports and rewards employees who believe in the value of teamwork and who maintain a positive attitude towards team-based rewards.", "title": "Topics" }, { "paragraph_id": 58, "text": "Goals potentially motivate team members when goals contain three elements: difficulty, acceptance, and specificity. Under difficult goal conditions, teams with more committed members tend to outperform teams with less committed members.
When team members commit to team goals, team effectiveness is a function of how supportive members are with each other. The goals of individual team members and team goals interact. Team and individual goals must be coordinated. Individual goals must be consistent with team goals in order for a team to be effective.", "title": "Topics" }, { "paragraph_id": 59, "text": "Job satisfaction is often thought to reflect the extent to which a worker likes his or her job, or individual aspects or facets of jobs. It is one of the most heavily researched topics in I-O psychology. Job satisfaction has theoretical and practical utility for the field. It has been linked to important job outcomes including absenteeism, accidents, counterproductive work behavior, customer service, cyberloafing, job performance, organizational citizenship behavior, physical and psychological health, and turnover. A meta-analysis found job satisfaction to be related to life satisfaction, happiness, positive affect, and the absence of negative affect.", "title": "Topics" }, { "paragraph_id": 60, "text": "Productive behavior is defined as employee behavior that contributes positively to the goals and objectives of an organization. When an employee begins a new job, there is a transition period during which he or she may not contribute significantly. To assist with this transition an employee typically requires job-related training. In financial terms, productive behavior represents the point at which an organization begins to achieve some return on the investment it has made in a new employee. IO psychologists are ordinarily more focused on productive behavior than job or task performance, including in-role and extra-role performance. In-role performance tells managers how well an employee performs the required aspects of the job; extra-role performance includes behaviors that are not necessarily required by the job but that nonetheless contribute to organizational effectiveness.
By taking both in-role and extra-role performance into account, an I-O psychologist is able to assess employees' effectiveness (how well they do what they were hired to do), efficiency (outputs relative to inputs), and productivity (how much they help the organization reach its goals). Three forms of productive behavior that IO psychologists often evaluate include job performance, organizational citizenship behavior (see below), and innovation.", "title": "Topics" }, { "paragraph_id": 61, "text": "Job performance represents behaviors employees engage in while at work that contribute to organizational goals. These behaviors are formally evaluated by an organization as part of an employee's responsibilities. In order to understand and ultimately predict job performance, it is important to be precise when defining the term. Job performance is about behaviors that are within the control of the employee and not about results (effectiveness), the costs involved in achieving results (productivity), the results that can be achieved in a period of time (efficiency), or the value an organization places on a given level of performance, effectiveness, productivity or efficiency (utility).", "title": "Topics" }, { "paragraph_id": 62, "text": "To model job performance, researchers have attempted to define a set of dimensions that are common to all jobs. Using a common set of dimensions provides a consistent basis for assessing performance and enables the comparison of performance across jobs. Performance is commonly broken into two major categories: in-role (technical aspects of a job) and extra-role (non-technical abilities such as communication skills and being a good team member). While this distinction in behavior has been challenged it is commonly made by both employees and management. A model of performance by Campbell breaks performance into in-role and extra-role categories.
Campbell labeled job-specific task proficiency and non-job-specific task proficiency as in-role dimensions, while written and oral communication, demonstrating effort, maintaining personal discipline, facilitating peer and team performance, supervision and leadership and management and administration are labeled as extra-role dimensions. Murphy's model of job performance also broke job performance into in-role and extra-role categories. However, task-oriented behaviors composed the in-role category and the extra-role category included interpersonally-oriented behaviors, down-time behaviors and destructive and hazardous behaviors. The measurement of job performance is usually done through pencil/paper tests, job skills tests, on-site hands-on tests, off-site hands-on tests, high-fidelity simulations, symbolic simulations, task ratings and global ratings. These various tools are often used to evaluate performance on specific tasks and overall job performance. Van Dyne and LePine developed a measurement model in which overall job performance was evaluated using Campbell's in-role and extra-role categories. Here, in-role performance was reflected through how well \"employees met their performance expectations and performed well at the tasks that made up the employees' job.\" Dimensions regarding how well the employee assists others with their work for the benefit of the group, if the employee voices new ideas for projects or changes to procedure and whether the employee attends functions that help the group composed the extra-role category.
Such scales can be used to clearly define the behaviors that constitute poor, average, and superior performance. Additional factors that complicate the measurement of job performance include the instability of job performance over time due to forces such as changing performance criteria, the structure of the job itself and the restriction of variation in individual performance by organizational forces. These factors include errors in job measurement techniques, acceptance and the justification of poor performance, and lack of importance of individual performance.", "title": "Topics" }, { "paragraph_id": 64, "text": "The determinants of job performance consist of factors having to do with the individual worker as well as environmental factors in the workplace. According to Campbell's Model of The Determinants of Job Performance, job performance is a result of the interaction between declarative knowledge (knowledge of facts or things), procedural knowledge (knowledge of what needs to be done and how to do it), and motivation (reflective of an employee's choices regarding whether to expend effort, the level of effort to expend, and whether to persist with the level of effort chosen). The interplay between these factors shows that an employee may, for example, have a low level of declarative knowledge, but may still have a high level of performance if the employee has high levels of procedural knowledge and motivation.", "title": "Topics" }, { "paragraph_id": 65, "text": "Regardless of the job, three determinants stand out as predictors of performance: (1) general mental ability (especially for jobs higher in complexity); (2) job experience (although there is a law of diminishing returns); and (3) the personality trait of conscientiousness (people who are dependable and achievement-oriented, who plan well). These determinants appear to influence performance largely through the acquisition and usage of job knowledge and the motivation to do well.
Further, an expanding area of research in job performance determinants includes emotional intelligence.", "title": "Topics" }, { "paragraph_id": 66, "text": "Organizational citizenship behaviors (OCBs) are another form of workplace behavior that IO psychologists are involved with. OCBs tend to be beneficial to both the organization and other workers. Dennis Organ (1988) defines OCBs as \"individual behavior that is discretionary, not directly or explicitly recognized by the formal reward system, and that in the aggregate promotes the effective functioning of the organization.\" Behaviors that qualify as OCBs can fall into one of the following five categories: altruism, courtesy, sportsmanship, conscientiousness, and civic virtue. OCBs have also been categorized in other ways, for example, by their intended targets: individuals, supervisors, and the organization as a whole. Other alternative ways of categorizing OCBs include \"compulsory OCBs\", which are engaged in owing to coercive persuasion or peer pressure rather than out of good will. The extent to which OCBs are voluntary has been the subject of some debate.", "title": "Topics" }, { "paragraph_id": 67, "text": "Other research suggests that some employees perform OCBs to influence how they are viewed within the organization. While these behaviors are not formally part of the job description, performing them can influence performance appraisals. Researchers have advanced the view that employees engage in OCBs as a form of \"impression management,\" a term coined by Erving Goffman. Goffman defined impression management as \"the way in which the individual ... presents himself and his activity to others, the ways in which he guides and controls the impression they form of him, and the kinds of things he may and may not do while sustaining his performance before them.\"
Some researchers have hypothesized that OCBs are not performed out of good will, positive affect, etc., but instead as a way of being noticed by others, including supervisors.", "title": "Topics" }, { "paragraph_id": 68, "text": "Four qualities are generally linked to creative and innovative behavior by individuals:", "title": "Topics" }, { "paragraph_id": 69, "text": "At the organizational level, a study by Damanpour identified four specific characteristics that may predict innovation:", "title": "Topics" }, { "paragraph_id": 70, "text": "Counterproductive work behavior (CWB) can be defined as employee behavior that goes against the goals of an organization. These behaviors can be intentional or unintentional and result from a wide range of underlying causes and motivations. Some CWBs have instrumental motivations (e.g., theft). It has been proposed that a person-by-environment interaction can be utilized to explain a variety of counterproductive behaviors. For instance, an employee who sabotages another employee's work may do so because of lax supervision (environment) and underlying psychopathology (person) that work in concert to result in the counterproductive behavior. There is evidence that an emotional response (e.g., anger) to job stress (e.g., unfair treatment) can motivate CWBs.", "title": "Topics" }, { "paragraph_id": 71, "text": "The forms of counterproductive behavior with the most empirical examination are ineffective job performance, absenteeism, job turnover, and accidents. Less common but potentially more detrimental forms of counterproductive behavior have also been investigated including violence and sexual harassment.", "title": "Topics" }, { "paragraph_id": 72, "text": "Leadership can be defined as a process of influencing others to agree on a shared purpose, and to work towards shared objectives. A distinction should be made between leadership and management. Managers process administrative tasks and organize work environments.
Although leaders may be required to undertake managerial duties as well, leaders typically focus on inspiring followers and creating a shared organizational culture and values. Managers deal with complexity, while leaders deal with initiating and adapting to change. Managers undertake the tasks of planning, budgeting, organizing, staffing, controlling, and problem solving. In contrast, leaders undertake the tasks of setting a direction or vision, aligning people to shared goals, communicating, and motivating.", "title": "Topics" }, { "paragraph_id": 73, "text": "Approaches to studying leadership can be broadly classified into three categories: Leader-focused approaches, contingency-focused approaches, and follower-focused approaches.", "title": "Topics" }, { "paragraph_id": 74, "text": "Leader-focused approaches look to organizational leaders to determine the characteristics of effective leadership. According to the trait approach, more effective leaders possess certain traits that less effective leaders lack. More recently, this approach is being used to predict leader emergence. The following traits have been identified as those that predict leader emergence when there is no formal leader: high intelligence, a high need for dominance, high self-motivation, and social perceptiveness. Another leader-focused approach is the behavioral approach, which focuses on the behaviors that distinguish effective from ineffective leaders. There are two categories of leadership behaviors: consideration and initiating structure. Behaviors associated with the category of consideration include showing subordinates they are valued and that the leader cares about them. An example of a consideration behavior is showing compassion when problems arise in or out of the office. Behaviors associated with the category of initiating structure include facilitating the task performance of groups.
One example of an initiating structure behavior is meeting one-on-one with subordinates to explain expectations and goals. The final leader-focused approach is power and influence. To be most effective, a leader should be able to influence others to behave in ways that are in line with the organization's mission and goals. How influential a leader can be depends on their social power – their potential to influence their subordinates. There are six bases of power: French and Raven's classic five bases of coercive, reward, legitimate, expert, and referent power, plus informational power. A leader can use several different tactics to influence others within an organization. These include: rational persuasion, inspirational appeal, consultation, ingratiation, exchange, personal appeal, coalition, legitimating, and pressure.", "title": "Topics" }, { "paragraph_id": 75, "text": "Of the three approaches to leadership, contingency-focused approaches have been the most prevalent over the past 30 years. Contingency-focused theories base a leader's effectiveness on their ability to assess a situation and adapt their behavior accordingly. These theories assume that an effective leader can accurately \"read\" a situation and skillfully employ a leadership style that meets the needs of the individuals involved and the task at hand. A brief introduction to the most prominent contingency-focused theories will follow.", "title": "Topics" }, { "paragraph_id": 76, "text": "The Fiedler contingency model holds that a leader's effectiveness depends on the interaction between their characteristics and the characteristics of the situation. Path–goal theory asserts that the role of the leader is to help his or her subordinates achieve their goals. To effectively do this, leaders must skillfully select from four different leadership styles to meet the situational factors. The situational factors are a product of the characteristics of subordinates and the characteristics of the environment.
The leader–member exchange theory (LMX) focuses on how leader–subordinate relationships develop. Generally speaking, when a subordinate performs well or when there are positive exchanges between a leader and a subordinate, their relationship is strengthened, performance and job satisfaction are enhanced, and the subordinate will feel more commitment to the leader and the organization as a whole. The Vroom–Yetton–Jago model focuses on decision-making with respect to a feasibility set.", "title": "Topics" }, { "paragraph_id": 77, "text": "I-O psychologists may also become involved with organizational change, a process which some call organizational development (OD). Tools used to advance organizational development include the survey-feedback technique. The technique involves the periodic assessment (via surveys) of employee attitudes and feelings. The results are conveyed to organizational stakeholders, who may want to take the organization in a particular direction. Another tool is the team-building technique. Because many if not most tasks within an organization are completed by small groups and/or teams, team building can become important for organizational success. In order to enhance a team's morale and problem-solving skills, I-O psychologists help the groups to improve their self-confidence, group cohesiveness, and working effectiveness.", "title": "Topics" }, { "paragraph_id": 78, "text": "An important topic in I-O is the connection between people’s working and nonworking lives. Two concepts are particularly relevant. Work-family conflict is the incompatibility between the job and family life. Conflict can occur when stressful experiences in one domain spill over into the other, such as someone coming home in a bad mood after having a difficult day at work.
It can also occur when there are time conflicts, such as having a work meeting at the same time as a child’s doctor’s appointment.", "title": "Topics" }, { "paragraph_id": 79, "text": "Work-family enrichment (also called work-family facilitation) occurs when one domain provides benefits to the other. For example, a spouse might assist with a work task or a supervisor might offer assistance with a family problem.", "title": "Topics" }, { "paragraph_id": 80, "text": "I-O psychology and organizational behavior researchers have sometimes investigated similar topics. The overlap has led to some confusion regarding how the two disciplines differ. Sometimes there has been confusion within organizations regarding the practical duties of I-O psychologists and human resource management specialists.", "title": "Topics" }, { "paragraph_id": 81, "text": "The minimum requirement for working as an IO psychologist is a master's degree. Normally, this degree requires about two to three years of postgraduate work to complete. Of all the degrees granted in IO psychology each year, approximately two-thirds are at the master's level.", "title": "I-O Psychology As An Occupation" }, { "paragraph_id": 82, "text": "A comprehensive list of US and Canadian master's and doctoral programs can be found at the web site of the Society for Industrial and Organizational Psychology (SIOP). Admission into IO psychology PhD programs is highly competitive; many programs accept only a small number of applicants each year.", "title": "I-O Psychology As An Occupation" }, { "paragraph_id": 83, "text": "There are graduate degree programs in IO psychology outside of the US and Canada. The SIOP web site lists some of them.", "title": "I-O Psychology As An Occupation" }, { "paragraph_id": 84, "text": "In Australia, organisational psychologists must be accredited by the Australian Psychological Society (APS). 
To become an organisational psychologist, one must meet the criteria for a general psychologist's licence: a three-year bachelor's degree in psychology, a fourth-year honours degree or postgraduate diploma in psychology, and two years of full-time supervised practice plus 80 hours of professional development. There are other avenues available, such as a two-year supervised training program after honours (i.e. 4+2 pathway), or one year of postgraduate coursework and practical placements followed by a one-year supervised training program (i.e. 5+1 pathway). After this, psychologists can elect to specialise as Organisational Psychologists in Australia.", "title": "I-O Psychology As An Occupation" }, { "paragraph_id": 85, "text": "There are many different sets of competencies for different specializations within IO psychology and IO psychologists are versatile behavioral scientists. For example, an IO psychologist specializing in selection and recruiting should have expertise in finding the best talent for the organization and getting everyone on board while he or she might not need to know much about executive coaching. Some IO psychologists specialize in specific areas of consulting whereas others tend to generalize their areas of expertise. There are basic skills and knowledge an individual needs in order to be an effective IO psychologist, which include being an independent learner, interpersonal skills (e.g., listening skills), and general consultation skills (e.g., skills and knowledge in the problem area).", "title": "I-O Psychology As An Occupation" }, { "paragraph_id": 86, "text": "U.S. News & World Report lists I-O Psychology as the third best science job, with a strong job market in the U.S. In the 2020 SIOP salary survey, the median annual salary for a PhD in IO psychology was $125,000; for a master's-level IO psychologist, $88,900. The highest paid PhD IO psychologists were self-employed consultants who had a median annual income of $167,000.
The highest paid in private industry worked in IT ($153,000), retail ($151,000) and healthcare ($147,000). The lowest earners were found in state and local government positions, averaging approximately $100,000, and in academic positions in colleges and universities that do not award doctoral degrees, with median salaries between $80,000 and $94,000.", "title": "I-O Psychology As An Occupation" }, { "paragraph_id": 87, "text": "An IO psychologist, whether an academic, consultant or an employee of an organization, is expected to maintain high ethical standards. SIOP encourages its members to follow the APA Ethics Code. With more organizations becoming global, it is important that when an IO psychologist works outside her or his home country, the psychologist is aware of rules, regulations, and cultures of the organizations and countries in which the psychologist works, while also adhering to the ethical standards set at home.", "title": "Ethical Principles for I-O Psychology" } ]
Industrial and organizational psychology "focuses the lens of psychological science on a key aspect of human life, namely, their work lives. In general, the goals of I-O psychology are to better understand and optimize the effectiveness, health, and well-being of both individuals and organizations." It is an applied discipline within psychology and is an international profession. I-O psychology is also known as occupational psychology in the United Kingdom, organisational psychology in Australia and New Zealand, and work and organizational (WO) psychology throughout Europe and Brazil. Industrial, work, and organizational (IWO) psychology is the broader, more global term for the science and profession. I-O psychologists are trained in the scientist–practitioner model. As an applied field, the discipline involves both research and practice and I-O psychologists apply psychological theories and principles to organizations and the individuals within them. They contribute to an organization's success by improving the job performance, wellbeing, motivation, job satisfaction and the health and safety of employees. An I-O psychologist conducts research on employee attitudes, behaviors, emotions, motivation, and stress. The field is concerned with how these things can be improved through recruitment processes, training programs, feedback, management systems and other interventions. I-O psychology research and practice also includes the work–nonwork interface such as selecting and transitioning into a new career, occupational burnout, unemployment, retirement, and work-family conflict and balance. I-O psychology is one of 17 professional specialties recognized by the American Psychological Association (APA). In the United States the profession is represented by Division 14 of the APA and is formally known as the Society for Industrial and Organizational Psychology (SIOP). Similar I-O psychology societies can be found in many countries.
In 2009 the Alliance for Organizational Psychology was formed; it is a federation of Work, Industrial, & Organizational Psychology societies and "network partners" from around the world.
2002-02-25T15:51:15Z
2023-12-18T08:12:32Z
https://en.wikipedia.org/wiki/Industrial_and_organizational_psychology
15,451
International Council of Unitarians and Universalists
The International Council of Unitarians and Universalists (ICUU) was an umbrella organization founded in 1995 comprising many Unitarian, Universalist, and Unitarian Universalist organizations. It was dissolved in 2021 along with the Unitarian Universalist Partner Church Council to make way for a new merged entity. Some groups represented only a few hundred people, while the largest, the Unitarian Universalist Association, had more than 160,000 members as of May 2011—including over 150,000 in the United States. The original initiative for its establishment was contained in a resolution of the General Assembly of Unitarian and Free Christian Churches (British Unitarians) in 1987. This led to the establishment of the Advocates for the Establishment of an International Organization of Unitarians (AEIOU), which worked towards creating the council. However, the General Assembly resolution provided no funding. The Unitarian Universalist Association (UUA) became particularly interested in the establishment of a council when it had to deal with an increasing number of applications for membership from congregations outside North America. It had already granted membership to congregations in Adelaide, Auckland, the Philippines and Pakistan, and congregations in Sydney, Russia and Spain had applied for membership. Rather than admit congregations from all over the world, the UUA hoped that they would join a world council instead. The UUA thus became willing to provide funding for the council's establishment. As a result, the council was finally established at a meeting in Essex, Massachusetts, United States on 23–26 March 1995. 
The Preamble to the Constitution of the International Council of Unitarians and Universalists reads: We, the member groups of the International Council of Unitarians and Universalists, affirming our belief in religious community based on: declare our purposes to be: Polish Unitarians have reported a need for a period of reorganization, and that at this time they are unable to maintain the level of activity needed to be full Council members, be it moved that membership of these groups be suspended. This action is taken with regret and the ICUU looks forward to welcoming Poland back into membership at the earliest possible date. Churches and religious associations which have expressed their will to become members of the Council may be admitted as "Provisional Members" for a period of time (generally two or four years), until the Council decides that they have shown their organizational stability, affinity with the ICUU principles and commitment to deserve becoming Full Members of the Council. Provisional Members are invited to Council meetings through a delegate but cannot vote. According to the Bylaws of the ICUU, Emerging Groups are "applicants that are deemed to be reasonable prospects for membership, but do not fulfil the conditions of either Provisional membership or Full Membership". These groups may be designated as Emerging Groups by the Executive Committee upon its sole discretion. Emerging Groups may be invited as observers to General Meetings. The current list of Emerging Groups after the last meeting of the Executive Committee (London, 22–25 November 2008) is as follows: Organizations with beliefs and purposes closely akin to those of ICUU but which by nature of their constitution are not eligible for full membership or which do not wish to become full members now or in the foreseeable future, may become Associates of the ICUU. The application must be approved by the ICUU Council Meeting.
2001-12-29T04:02:12Z
2023-11-02T21:25:51Z
https://en.wikipedia.org/wiki/International_Council_of_Unitarians_and_Universalists
15,454
Itanium
Itanium (/aɪˈteɪniəm/ eye-TAY-nee-əm) is a discontinued family of 64-bit Intel microprocessors that implement the Intel Itanium architecture (formerly called IA-64). The Itanium architecture originated at Hewlett-Packard (HP), and was later jointly developed by HP and Intel. Intel launched the processors in June 2001, initially marketing them for enterprise servers and high-performance computing systems. In the concept phase, engineers said "we could run circles around PowerPC, that we could kill the x86." Early predictions were that IA-64 would expand to the lower-end servers, supplanting Xeon, and then penetrate the personal computer market, eventually supplanting reduced instruction set computing (RISC) and complex instruction set computing (CISC) architectures for all general-purpose applications. When first released in 2001, Itanium's performance was disappointing compared to better-established RISC and CISC processors. Emulation to run existing x86 applications and operating systems was particularly poor. Itanium-based systems were produced by HP and its successor Hewlett Packard Enterprise (HPE) as the Integrity Servers line, and by several other manufacturers. In 2008, Itanium was the fourth-most deployed microprocessor architecture for enterprise-class systems, behind x86-64, Power ISA, and SPARC. In February 2017, Intel released the final generation, Kittson, to test customers, and in May began shipping it in volume. It was used exclusively in mission-critical servers from HPE. In 2019, Intel announced that new orders for Itanium would be accepted until January 30, 2020, and shipments would cease by July 29, 2021. This took place on schedule. Itanium never sold well outside enterprise servers and high-performance computing systems, and the architecture was ultimately supplanted by competitor AMD's x86-64 (also called AMD64) architecture. 
x86-64 is a compatible extension to the 32-bit x86 architecture, implemented by, for example, Intel's own Xeon line and AMD's Opteron line. Since 2009, most servers have shipped with x86-64 processors, and they dominate the low-cost desktop and laptop markets, which were not initially targeted by Itanium. In an article titled "Intel's Itanium is finally dead: The Itanic sunken by the x86 juggernaut", TechSpot declared that "Itanium's promise ended up sunken by a lack of legacy 32-bit support and difficulties in working with the architecture for writing and maintaining software", while the dream of a single dominant ISA would instead be realized by the AMD64 extensions. In 1989, HP started to research an architecture that would exceed the expected limits of reduced instruction set computer (RISC) architectures, limits caused by the great increase in complexity needed to execute multiple instructions per cycle with dynamic dependency checking and precise exception handling. HP hired Bob Rau of Cydrome and Josh Fisher of Multiflow, the pioneers of very long instruction word (VLIW) computing. One VLIW instruction word can contain several independent instructions, which can be executed in parallel without having to evaluate them for independence. A compiler must attempt to find valid combinations of instructions that can be executed at the same time, effectively performing the instruction scheduling that conventional superscalar processors must do in hardware at runtime. 
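The compile-time scheduling described above can be sketched in miniature. The following is a hypothetical illustration, not real IA-64 tooling: a greedy scheduler that packs simple register-transfer "instructions" into bundles of up to three, starting a new bundle whenever a data hazard with the current bundle is found.

```python
# Minimal sketch of compile-time VLIW-style scheduling (hypothetical
# instruction format, not real IA-64). Each instruction is a pair
# (dest, sources); two instructions conflict if one writes a register
# the other reads or writes (RAW/WAW/WAR hazards).

def conflicts(a, b):
    dest_a, srcs_a = a
    dest_b, srcs_b = b
    return (dest_a == dest_b        # write-after-write
            or dest_a in srcs_b     # read-after-write
            or dest_b in srcs_a)    # write-after-read

def schedule(instrs, width=3):
    """Greedily group instructions, in program order, into bundles of
    at most `width` independent instructions."""
    bundles = []
    for ins in instrs:
        # Place into the current bundle only if there is room and no
        # hazard against any instruction already in it.
        if (bundles and len(bundles[-1]) < width
                and not any(conflicts(prev, ins) for prev in bundles[-1])):
            bundles[-1].append(ins)
        else:
            bundles.append([ins])
    return bundles

prog = [
    ("r1", ["r2", "r3"]),  # r1 = r2 + r3
    ("r4", ["r5", "r6"]),  # r4 = r5 + r6  (independent -> same bundle)
    ("r7", ["r1", "r4"]),  # r7 = r1 + r4  (depends on both -> new bundle)
]
for i, bundle in enumerate(schedule(prog)):
    print(f"bundle {i}: {bundle}")
```

A real EPIC compiler performs far more aggressive analysis (across basic blocks, with speculation and predication), but the core task of proving instruction independence at compile time rather than in hardware is the same.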
HP researchers modified the classic VLIW into a new type of architecture, later named Explicitly Parallel Instruction Computing (EPIC), which differs by: having template bits that show which instructions are independent within and between the bundles of three instructions, which enables the explicitly parallel execution of multiple bundles and allows the processors' issue width to be increased without recompiling; by predication of instructions to reduce the need for branches; and by full interlocking to eliminate the delay slots. In EPIC the assignment of execution units to instructions and the timing of their issue can be decided by hardware, unlike in the classic VLIW. HP intended to use these features in PA-WideWord, the planned successor to their PA-RISC ISA. EPIC was intended to provide the best balance between the efficient use of silicon area and electricity, and general-purpose flexibility. In 1993 HP held an internal competition to design the best (simulated) microarchitectures of a RISC and an EPIC type, led by Jerry Huck and Rajiv Gupta respectively. The EPIC team won, with over double the simulated performance of the RISC competitor. At the same time Intel was also looking for ways to make better ISAs. In 1989 Intel had launched the i860, which it marketed for workstations, servers, and iPSC and Paragon supercomputers. It differed from other RISCs in being able to switch between a normal single-instruction-per-cycle mode and a mode in which pairs of instructions are explicitly defined as parallel, so that they execute in the same cycle without dependency checking. Another distinguishing feature was its instructions for an exposed floating-point pipeline, which enabled a tripling of throughput compared to the conventional floating-point instructions. Both of these features were left largely unused because compilers didn't support them, a problem that later challenged Itanium too. 
Without them, i860's parallelism (and thus performance) was no better than other RISCs, so it failed in the market. Itanium would adopt a more flexible form of explicit parallelism than i860 had. In November 1993 HP approached Intel, seeking collaboration on an innovative future architecture. At the time Intel was looking to extend x86 to 64 bits in a processor codenamed P7, which they found challenging. Later Intel claimed that four different design teams had explored 64-bit extensions, but each of them concluded that it was not economically feasible. At the meeting with HP, Intel's engineers were impressed when Jerry Huck and Rajiv Gupta presented the PA-WideWord architecture they had designed to replace PA-RISC. "When we saw WideWord, we saw a lot of things we had only been looking at doing, already in their full glory", said Intel's John Crawford, who in 1994 became the chief architect of Merced, and who had earlier argued against extending the x86 with P7. HP's Gupta recalled: "I looked Albert Yu [Intel's general manager for microprocessors] in the eyes and showed him we could run circles around PowerPC, that we could kill PowerPC, that we could kill the x86." Soon Intel and HP started conducting in-depth technical discussions at an HP office, where each side had six engineers who exchanged and discussed both companies' confidential architectural research. They then decided to use not only PA-WideWord, but also the more experimental HP Labs PlayDoh as the source of their joint future architecture. Convinced of the superiority of the new project, in 1994 Intel canceled their existing plans for P7. In June 1994 Intel and HP announced their joint effort to make a new ISA that would adopt ideas of Wide Word and VLIW. Yu declared: "If I were competitors, I'd be really worried. If you think you have a future, you don't." On P7's future, Intel said the alliance would impact it, but "it is not clear" whether it would "fully encompass the new architecture". 
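EPIC's predication, mentioned earlier, converts short branches into instructions that always issue but only take effect when a guarding predicate register is true. The following is a minimal simulation of the idea; the encoding and semantics are hypothetical, not real IA-64.

```python
# Hypothetical sketch of predicated execution. A compare writes a
# complementary predicate pair; later instructions are guarded by one
# predicate and only commit when it is true, so computing
# "x = max(a, b)" needs no branch.

def execute(program, regs):
    preds = {"p0": True}                 # p0: the always-true predicate
    for guard, kind, dest, srcs in program:
        if not preds[guard]:
            continue                     # guarded-off: no effect
        if kind == "cmp.gt":             # sets a complementary pair
            p_true, p_false = dest
            result = regs[srcs[0]] > regs[srcs[1]]
            preds[p_true], preds[p_false] = result, not result
        elif kind == "mov":
            regs[dest] = regs[srcs[0]]
    return regs

# x = max(a, b) without a branch:
program = [
    ("p0", "cmp.gt", ("p1", "p2"), ["a", "b"]),
    ("p1", "mov", "x", ["a"]),           # commits only if a > b
    ("p2", "mov", "x", ["b"]),           # commits only if a <= b
]
regs = execute(program, {"a": 7, "b": 3})
print(regs["x"])                         # 7
```

In hardware, both guarded moves can issue in the same cycle and the predicate simply suppresses one result, which is what lets the compiler remove hard-to-predict branches.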
Later the same month, Intel said that some of the first features of the new architecture would start appearing on Intel chips as early as the P7, but the full version would appear sometime later. In August 1994 EE Times reported that Intel told investors that P7 was being re-evaluated and possibly canceled in favor of the HP processor. Intel immediately issued a clarification, saying that P7 was still being defined, and that HP might contribute to its architecture. Later it was confirmed that the P7 codename had indeed passed to the HP-Intel processor. By early 1996 Intel revealed its new codename, Merced. HP believed that it was no longer cost-effective for individual enterprise systems companies such as itself to develop proprietary microprocessors, so it partnered with Intel in 1994 to develop the IA-64 architecture, derived from EPIC. Intel was willing to undertake the very large development effort on IA-64 in the expectation that the resulting microprocessor would be used by the majority of enterprise systems manufacturers. HP and Intel initiated a large joint development effort with a goal of delivering the first product, Merced, in 1998. Merced was designed by a team of 500, which Intel later admitted was too inexperienced, with many recent college graduates. Crawford (Intel) was the chief architect, while Huck (HP) held the second position. Early in the development HP and Intel had a disagreement in which Intel wanted more dedicated hardware for more floating-point instructions. HP prevailed after the discovery of a floating-point hardware bug in Intel's Pentium. When Merced was floorplanned for the first time in mid-1996, it turned out to be far too large. "This was a lot worse than anything I'd seen before", said Crawford. The designers had to reduce the complexity (and thus performance) of subsystems, including the x86 unit, and to cut the L2 cache to 96 KB. 
Eventually it was agreed that the size target could only be reached by using the 180 nm process instead of the intended 250 nm. Later problems emerged with attempts to speed up the critical paths without disturbing the other circuits' speed. Merced was taped out on 4 July 1999, and in August Intel produced the first complete test chip. The expectations for Merced waned over time as delays and performance deficiencies emerged, shifting the focus and onus for success onto the HP-led second Itanium design, codenamed McKinley. In July 1997 the switch to the 180 nm process delayed Merced into the second half of 1999. Shortly before the reveal of EPIC at the Microprocessor Forum in October 1997, an analyst of the Microprocessor Report said that Itanium would "not show the competitive performance until 2001. It will take the second version of the chip for the performance to get shown". At the Forum, Intel's Fred Pollack originated the "wait for McKinley" mantra when he said that it would double Merced's performance and would "knock your socks off", while using the same 180 nm process as Merced. Pollack also said that Merced's x86 performance would be lower than that of the fastest x86 processors, and that x86 would "continue to grow at its historical rates". Intel said that IA-64 would not have much presence in the consumer market for 5 to 10 years. Later it was reported that HP's motivation when starting to design McKinley in 1996 was to have more control over the project so as to avoid the issues affecting Merced's performance and schedule. The design team finalized McKinley's project goals in 1997. In late May 1998 Merced was delayed to mid-2000, and by August 1998 analysts were questioning its commercial viability, given that McKinley would arrive shortly after with double the performance, as delays were causing Merced to turn into simply a development vehicle for the Itanium ecosystem. The "wait for McKinley" narrative was becoming prevalent. 
The same day it was reported that, due to the delays, HP would extend its line of PA-RISC PA-8000 series processors from PA-8500 to as far as PA-8900. In October 1998 HP announced its plans for four more generations of PA-RISC processors, with PA-8900 set to reach 1.2 GHz in 2003. By March 1999 some analysts expected Merced to ship in volume only in 2001, but the volume was widely expected to be low as most customers would wait for McKinley. In May 1999, two months before Merced's tape-out, an analyst said that failure to tape out before July would result in another delay. In July 1999, upon reports that the first silicon would be made in late August, analysts predicted a delay to late 2000, and came to agree that Merced would be used chiefly for debugging and testing the IA-64 software. Linley Gwennap of MPR said of Merced that "at this point, everyone is expecting it's going to be late and slow, and the real advance is going to come from McKinley. What this does is puts a lot more pressure on McKinley and for that team to deliver". By then, Intel had revealed that Merced would be initially priced at $5000. In August 1999 HP advised some of their customers to skip Merced and wait for McKinley. By July 2000 HP told the press that the first Itanium systems would be for niche uses, and that "You're not going to put this stuff near your data center for several years"; HP expected its Itanium systems to outsell the PA-RISC systems only in 2005. That same July, Intel told of another delay, due to a stepping change to fix bugs. Now only "pilot systems" would ship that year, while general availability was pushed to the "first half of 2001". Server makers had largely forgone spending on R&D for Merced-based systems, instead using motherboards or whole servers of Intel's design. To foster a wide ecosystem, by mid-2000 Intel had provided 15,000 Itaniums in 5,000 systems to software developers and hardware designers. 
In March 2001 Intel said Itanium systems would begin shipping to customers in the second quarter, followed by a broader deployment in the second half of the year. By then even Intel publicly acknowledged that many customers would wait for McKinley. During development, Intel, HP, and industry analysts predicted that IA-64 would dominate first in 64-bit servers and workstations, then expand to the lower-end servers, supplanting Xeon, and finally penetrate the personal computer market, eventually supplanting RISC and complex instruction set computing (CISC) architectures for all general-purpose applications, though not replacing x86 "for the foreseeable future" according to Intel. In 1997–1998, Intel CEO Andy Grove predicted that Itanium would not come to desktop computers for four or five years after launch, and said "I don't see Merced appearing on a mainstream desktop inside of a decade". In contrast, Itanium was expected to capture 70% of the 64-bit server market in 2002. Already in 1998 Itanium's focus on the high end of the computer market was criticized for making it vulnerable to challengers expanding from the lower-end market segments, but many people in the computer industry were reluctant to voice doubts about Itanium for fear of Intel's retaliation. Compaq and Silicon Graphics decided to abandon further development of the Alpha and MIPS architectures respectively in favor of migrating to IA-64. Several groups ported operating systems for the architecture, including Microsoft Windows, OpenVMS, Linux, HP-UX, Solaris, Tru64 UNIX, and Monterey/64. The latter three were canceled before reaching the market. By 1997, it was apparent that the IA-64 architecture and the compiler were much more difficult to implement than originally thought, and the delivery timeframe of Merced began slipping. Intel announced the official name of the processor, Itanium, on October 4, 1999. 
Within hours, the name Itanic had been coined on a Usenet newsgroup, a reference to the RMS Titanic, the "unsinkable" ocean liner that sank on her maiden voyage in 1912. "Itanic" was then often used by The Register, and others, to imply that the multibillion-dollar investment in Itanium—and the early hype associated with it—would be followed by its relatively quick demise. After sampling 40,000 chips to partners, Intel launched Itanium on May 29, 2001, with the first OEM systems from HP, IBM, and Dell shipping to customers in June. By then Itanium's performance was not superior to competing RISC and CISC processors. Itanium competed at the low end (primarily four-CPU and smaller systems) with servers based on x86 processors, and at the high end with IBM POWER and Sun Microsystems SPARC processors. Intel repositioned Itanium to focus on the high-end business and HPC computing markets, attempting to duplicate the x86's successful "horizontal" market (i.e., single architecture, multiple systems vendors). The success of this initial processor version was limited to replacing PA-RISC in HP systems, Alpha in Compaq systems and MIPS in SGI systems, though IBM also delivered a supercomputer based on this processor. POWER and SPARC remained strong, while the 32-bit x86 architecture continued to grow into the enterprise space, building on the economies of scale fueled by its enormous installed base. Only a few thousand systems using the original Merced Itanium processor were sold, due to relatively poor performance, high cost and limited software availability. Recognizing that the lack of software could be a serious problem for the future, Intel made thousands of these early systems available to independent software vendors (ISVs) to stimulate development. HP and Intel brought the next-generation Itanium 2 processor to the market a year later. 
A few of the microarchitectural features of Merced would be carried over to all subsequent Itanium designs, including the 16+16 KB L1 cache size and the 6-wide (two-bundle) instruction decoding. The Itanium 2 processor was released in July 2002, and was marketed for enterprise servers rather than for the whole gamut of high-end computing. The first Itanium 2, code-named McKinley, was jointly developed by HP and Intel, led by the HP team at Fort Collins, Colorado, taping out in December 2000. It relieved many of the performance problems of the original Itanium processor, which were mostly caused by an inefficient memory subsystem: it approximately halved the latency and doubled the fill bandwidth of each of the three levels of cache, while expanding the L2 cache from 96 to 256 KB. Floating-point data is excluded from the L1 cache, because the L2 cache's higher bandwidth is more beneficial to typical floating-point applications than low latency. The L3 cache was now integrated on-die, tripling in associativity and doubling in bus width. McKinley also greatly increased the number of possible instruction combinations in a VLIW-bundle and reached 25% higher frequency, despite having only eight pipeline stages versus Merced's ten. McKinley contains 221 million transistors (of which 25 million are for logic and 181 million for L3 cache), measured 19.5 mm by 21.6 mm (421 mm²) and was fabricated in a 180 nm, bulk CMOS process with six layers of aluminium metallization. In May 2003 it was disclosed that some McKinley processors can suffer from a critical-path erratum leading to system crashes. It can be avoided by lowering the processor frequency to 800 MHz. In 2003, AMD released the Opteron CPU, which implements its own 64-bit architecture called AMD64. The Opteron gained rapid acceptance in the enterprise server space because it provided an easy upgrade from x86. 
Under the influence of Microsoft, Intel responded by implementing AMD's x86-64 instruction set architecture instead of IA-64 in its Xeon microprocessors in 2004, resulting in a new industry-wide de facto standard. In 2003 Intel released a new Itanium 2 family member, codenamed Madison, initially with up to 1.5 GHz frequency and 6 MB of L3 cache. The Madison 9M chip released in November 2004 had 9 MB of L3 cache and frequency up to 1.6 GHz, reaching 1.67 GHz in July 2005. Both chips used a 130 nm process and were the basis of all new Itanium processors until Montecito was released in July 2006, with Deerfield being a low-wattage Madison, and Fanwood being a version of Madison 9M for lower-end servers with one or two CPU sockets. In November 2005, the major Itanium server manufacturers joined with Intel and a number of software vendors to form the Itanium Solutions Alliance to promote the architecture and accelerate the software porting effort. The Alliance announced that its members would invest $10 billion in the Itanium Solutions Alliance by the end of the decade. In early 2003, due to the success of IBM's dual-core POWER4, Intel announced that the first 90 nm Itanium processor, codenamed Montecito, would be delayed to 2005 so that it could be changed into a dual-core design, thus merging it with the Chivano project. In September 2004 Intel demonstrated a working Montecito system, and claimed that the inclusion of hyper-threading increased Montecito's performance by 10–20% and that its frequency could reach 2 GHz. After a delay to "mid-2006" and a reduction of the frequency to 1.6 GHz, on July 18 Intel delivered Montecito (marketed as the Itanium 2 9000 series), a dual-core processor with switch-on-event multithreading and split 256 KB + 1 MB L2 caches that roughly doubled the performance and decreased the energy consumption by about 20 percent. At 596 mm² and 1.72 billion transistors, it was the largest microprocessor at the time. 
It was supposed to feature Foxton Technology, a very sophisticated frequency regulator, which failed to pass validation and was thus not enabled for customers. Intel released the Itanium 9100 series, codenamed Montvale, in November 2007, retiring the "Itanium 2" brand. Originally intended to use the 65 nm process, it was changed into a revision of Montecito, enabling demand-based switching (like EIST) and up to 667 MT/s front-side bus, which were intended for Montecito, plus core-level lockstep. Montecito and Montvale were the last Itanium processors in whose design Hewlett-Packard's engineering team at Fort Collins had a key role, as the team was subsequently transferred to Intel's ownership. The original code name for the first Itanium with more than two cores was Tanglewood, but it was changed to Tukwila in late 2003 due to trademark issues. Intel discussed a "middle-of-the-decade Itanium" to succeed Montecito, achieving ten times the performance of Madison. It was being designed by the famed DEC Alpha team and was expected to have eight new multithreading-focused cores. Intel claimed "a lot more than two" cores and more than seven times the performance of Madison. In early 2004 Intel told of "plans to achieve up to double the performance over the Intel Xeon processor family at platform cost parity by 2007". By early 2005 Tukwila was redefined, now having fewer cores but focusing on single-threaded performance and multiprocessor scalability. In March 2005, Intel disclosed some details of Tukwila, the next Itanium processor after Montvale, to be released in 2007. Tukwila would have four processor cores and would replace the Itanium bus with a new Common System Interface, which would also be used by a new Xeon processor. Tukwila was to have a "common platform architecture" with a Xeon codenamed Whitefield, which was canceled in October 2005, when Intel revised Tukwila's delivery date to late 2008. 
In May 2009, the schedule for Tukwila was revised again, with the release to OEMs planned for the first quarter of 2010. The Itanium 9300 series processor, codenamed Tukwila, was released on February 8, 2010, with greater performance and memory capacity. The device uses a 65 nm process, includes two to four cores, up to 24 MB of on-die caches, Hyper-Threading technology and integrated memory controllers. It implements double-device data correction, which helps to fix memory errors. Tukwila also implements Intel QuickPath Interconnect (QPI) to replace the Itanium bus-based architecture. It has a peak interprocessor bandwidth of 96 GB/s and a peak memory bandwidth of 34 GB/s. With QuickPath, the processor has integrated memory controllers and interfaces the memory directly, using QPI interfaces to directly connect to other processors and I/O hubs. QuickPath is also used on Intel x86-64 processors using the Nehalem microarchitecture, which possibly enabled Tukwila and Nehalem to use the same chipsets. Tukwila incorporates two memory controllers, each of which has two links to Scalable Memory Buffers, which in turn support multiple DDR3 DIMMs, much like the Nehalem-based Xeon processor code-named Beckton. During the 2012 Hewlett-Packard Co. v. Oracle Corp. support lawsuit, court documents unsealed by a Santa Clara County Court judge revealed that in 2008, Hewlett-Packard had paid Intel around $440 million to keep producing and updating Itanium microprocessors from 2009 to 2014. In 2010, the two companies signed another $250 million deal, which obliged Intel to continue making Itanium CPUs for HP's machines until 2017. Under the terms of the agreements, HP had to pay for the chips it got from Intel, while Intel would launch the Tukwila, Poulson, Kittson, and Kittson+ chips in a bid to gradually boost the performance of the platform. Intel first mentioned Poulson on March 1, 2005, at the Spring IDF.
In June 2007 Intel said that Poulson would use a 32 nm process technology, skipping the 45 nm process. This was necessary for catching up after Itanium's delays left it at 90 nm competing against 65 nm and 45 nm processors. At ISSCC 2011, Intel presented a paper called "A 32nm 3.1 Billion Transistor 12-Wide-Issue Itanium Processor for Mission Critical Servers." Analyst David Kanter speculated that Poulson would use a new microarchitecture, with a more advanced form of multithreading that uses up to two threads, to improve performance for single-threaded and multithreaded workloads. Some information was also released at the Hot Chips conference, covering multithreading improvements, resiliency improvements (Intel Instruction Replay RAS) and a few new instructions (thread priority, an integer instruction, cache prefetching, and data access hints). Poulson was released on November 8, 2012, as the Itanium 9500 series processor. It is the follow-on processor to Tukwila. It features eight cores and has a 12-wide issue architecture, multithreading enhancements, and new instructions to take advantage of parallelism, especially in virtualization. The Poulson L3 cache is 32 MB, shared by all cores rather than divided among them as before. The L2 cache per core is 512 KB for instructions and 256 KB for data, 6 MB in total. The die size is 544 mm², less than that of its predecessor Tukwila (698.75 mm²). Intel's Product Change Notification (PCN) 111456-01 listed four models of the Itanium 9500 series CPU, which were later removed in a revised document. The parts were later listed in Intel's Material Declaration Data Sheets (MDDS) database. Intel later posted the Itanium 9500 reference manual. The models are the following: Intel had committed to at least one more generation after Poulson, first mentioning Kittson on 14 June 2007. Kittson was supposed to be on a 22 nm process and use the same LGA2011 socket and platform as Xeons.
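Returning to the Poulson cache figures quoted above, a quick arithmetic cross-check shows that the per-core L2 sizes are consistent with the stated 6 MB total:

```python
# Cross-check of the Poulson (Itanium 9500) L2 cache figures: eight cores,
# each with a 512 KB instruction L2 and a 256 KB data L2.
CORES = 8
L2I_KB = 512  # per-core instruction L2
L2D_KB = 256  # per-core data L2

total_kb = CORES * (L2I_KB + L2D_KB)
print(total_kb // 1024)  # -> 6 (MB), matching the stated total
```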
On 31 January 2013 Intel issued an update to their plans for Kittson: it would have the same LGA1248 socket and 32 nm process as Poulson, thus effectively halting any further development of Itanium processors. In April 2015, Intel, although it had not yet confirmed formal specifications, did confirm that it continued to work on the project. Meanwhile, the aggressively multicore Xeon E7 platform displaced Itanium-based solutions in the Intel roadmap. Even Hewlett-Packard, the main proponent and customer for Itanium, began selling x86-based Superdome and NonStop servers, and started to treat the Itanium-based versions as legacy products. Intel officially launched the Itanium 9700 series processor family on May 11, 2017. Kittson has no microarchitecture improvements over Poulson; despite nominally having a different stepping, it is functionally identical to the 9500 series, even having exactly the same bugs, the only difference being the 133 MHz higher frequency of the 9760 and 9750 over the 9560 and 9550 respectively. Intel announced that the 9700 series would be the last Itanium chips produced. The models are: In comparison with its Xeon family of server processors, Itanium was never a high-volume product for Intel. Intel does not release production numbers, but one industry analyst estimated that the production rate was 200,000 processors per year in 2007. According to Gartner Inc., the total number of Itanium servers (not processors) sold by all vendors in 2007 was about 55,000 (it is unclear whether clustered servers counted as a single server or not). This compares with 417,000 RISC servers (spread across all RISC vendors) and 8.4 million x86 servers. IDC reports that a total of 184,000 Itanium-based systems were sold from 2001 through 2007. For the combined POWER/SPARC/Itanium systems market, IDC reports that POWER captured 42% of revenue and SPARC captured 32%, while Itanium-based system revenue reached 26% in the second quarter of 2008.
According to an IDC analyst, in 2007, HP accounted for perhaps 80% of Itanium systems revenue. According to Gartner, in 2008, HP accounted for 95% of Itanium sales. HP's Itanium system sales were at an annual rate of $4.4 billion at the end of 2008, and declined to $3.5 billion by the end of 2009, compared to a 35% decline in UNIX system revenue for Sun and an 11% drop for IBM, with an x86-64 server revenue increase of 14% during this period. In December 2012, IDC released a research report stating that Itanium server shipments would remain flat through 2016, with an annual shipment of 26,000 systems (a decline of over 50% compared to shipments in 2008). By 2006, HP manufactured at least 80% of all Itanium systems, and sold 7,200 in the first quarter of 2006. The bulk of systems sold were enterprise servers and machines for large-scale technical computing, with an average selling price per system in excess of US$200,000. A typical system used eight or more Itanium processors. By 2012, only a few manufacturers offered Itanium systems, including HP, Bull, NEC, Inspur and Huawei. In addition, Intel offered a chassis that could be used by system integrators to build Itanium systems. By 2015, only HP supplied Itanium-based systems. After HP's split in late 2015, Itanium systems (branded as Integrity) have been handled by Hewlett Packard Enterprise (HPE), with a major update in 2017 (Integrity i6, and HP-UX 11i v3 Update 16). HPE also supports a few other operating systems, including Windows up to Server 2008 R2, Linux, OpenVMS and NonStop. Itanium is not affected by the Spectre and Meltdown vulnerabilities. Prior to the 9300-series (Tukwila), chipsets were needed to connect to the main memory and I/O devices, as the front-side bus to the chipset was the sole connection out of the processor (except for TAP (JTAG) and SMBus for debugging and system configuration). Two generations of buses existed: the original Itanium processor system bus (a.k.a.
Merced bus) had a 64-bit data width and a 133 MHz clock with DDR (266 MT/s), and was soon superseded by the 128-bit 200 MHz DDR (400 MT/s) Itanium 2 processor system bus (a.k.a. McKinley bus), which later reached 533 and 667 MT/s. Up to four CPUs per single bus could be used, but prior to the 9000-series, bus speeds over 400 MT/s were limited to at most two processors per bus. As no Itanium chipset could connect to more than four sockets, high-end servers needed multiple interconnected chipsets. The "Tukwila" Itanium processor model had been designed to share a common chipset with the Intel Xeon processor EX (Intel's Xeon processor designed for four-processor and larger servers). The goal was to streamline system development and reduce costs for server OEMs, many of which develop both Itanium- and Xeon-based servers. However, in 2013, this goal was pushed back to be "evaluated for future implementation opportunities". Before on-chip memory controllers and QPI, enterprise server manufacturers differentiated their systems by designing and developing chipsets that interface the processor to memory, interconnections, and peripheral controllers. "Enterprise server" referred to the then-lucrative market segment of high-end servers with high reliability, availability and serviceability and typically 16+ processor sockets, justifying their pricing by having a custom system-level architecture with their own chipsets at its heart, with capabilities far beyond what two-socket "commodity servers" could offer. Development of a chipset costs tens of millions of dollars and so represented a major commitment to the use of Itanium. Neither Intel nor IBM would develop Itanium 2 chipsets to support newer technologies such as DDR2 or PCI Express. Before "Tukwila" moved away from the FSB, chipsets supporting such technologies were manufactured by all Itanium server vendors, such as HP, Fujitsu, SGI, NEC, and Hitachi.
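The peak transfer rates of the two bus generations above follow directly from data width times transfer rate; a minimal sketch (using the vendors' decimal GB convention):

```python
def fsb_bandwidth_gb_s(width_bits, mt_per_s):
    """Peak front-side-bus bandwidth: bytes per transfer times transfers/s,
    expressed in decimal GB/s as the vendors quote it."""
    return (width_bits / 8) * mt_per_s / 1000

print(fsb_bandwidth_gb_s(64, 266))   # -> 2.128 (Merced bus)
print(fsb_bandwidth_gb_s(128, 400))  # -> 6.4   (McKinley bus)
print(fsb_bandwidth_gb_s(128, 533))  # -> 8.528 (later McKinley-bus speed)
print(fsb_bandwidth_gb_s(128, 667))  # -> 10.672
```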
The first generation of Itanium received no vendor-specific chipsets, only Intel's 460GX, consisting of ten distinct chips. It supported up to four CPUs and 64 GB of memory at 4.2 GB/s, twice the system bus's bandwidth. Addresses and data were handled by two different chips. The 460GX had an AGP 4x graphics bus, two 64-bit 66 MHz PCI buses and a configurable 33 MHz PCI interface (dual 32-bit or single 64-bit). There were many custom chipset designs for Itanium 2, but many smaller vendors chose to use Intel's E8870 chipset. It supports 128 GB of DDR SDRAM at 6.4 GB/s. It was originally designed for Rambus RDRAM serial memory, but when RDRAM failed to succeed, Intel added four DDR SDRAM-to-RDRAM converter chips to the chipset. When Intel had previously made such a converter for the Pentium III chipsets 820 and 840, it drastically cut performance. The E8870 provides eight 133 MHz PCI-X buses (4.2 GB/s total because of bottlenecks) and an ICH4 hub with six USB 2.0 ports. Two E8870s can be linked together by two E8870SP Scalability Port Switches, each containing a 1 MB (~200,000 cache lines) snoop filter, to create an 8-socket system with double the memory and PCI-X capacity, but still only one ICH4. Further expansion to 16 sockets was planned. In 2004 Intel revealed plans for its next Itanium chipset, codenamed Bayshore, to support PCIe and DDR2 memory, but canceled it the same year. HP has designed four different chipsets for Itanium 2: zx1, sx1000, zx2 and sx2000. All support 4 sockets per chipset, but sx1000 and sx2000 support the interconnection of up to 16 chipsets to create up to a 64-socket system. As it was developed alongside Itanium 2 itself, booting the first Itanium 2 in February 2001, zx1 became the first Itanium 2 chipset available, and later in 2004 also the first to support a 533 MT/s FSB. In its basic two-chip version it directly provides four channels of DDR-266 memory, giving 8.5 GB/s of bandwidth and 32 GB of capacity (through 12 DIMM slots).
In versions with memory expander boards, memory bandwidth reaches 12.8 GB/s, while the maximum capacity for the initial two-board 48-DIMM expanders was 96 GB, and the later single-board 32-DIMM expander supported up to 128 GB. The expanders increase the memory latency by 25 nanoseconds over the base 80 ns. Eight independent links went to the PCI-X and other peripheral devices (e.g. AGP in workstations), totaling 4 GB/s. HP's first high-end Itanium chipset was sx1000, launched in mid-2003 with the Integrity Superdome flagship server. It has two independent front-side buses, each bus supporting two sockets, giving 12.8 GB/s of combined bandwidth from the processors to the chipset. It has four links to data-only memory buffers and supports 64 GB of HP-designed 125 MHz memory at 16 GB/s. The above components form a system board called a cell. Two cells can be directly connected together to create an 8-socket glueless system. To connect four cells together, a pair of 8-ported crossbar switches is needed (adding 64 ns to inter-cell memory accesses), while four such pairs of crossbar switches are needed for the top-end system of 16 cells (64 sockets), giving 32 GB/s of bisection bandwidth. Cells maintain cache coherence through in-memory directories, which causes the minimum memory latency to be 241 ns. The latency to the most remote (NUMA) memory is 463 ns. The per-cell bandwidth to the I/O subsystems is 2 GB/s, despite the presence of 8 GB/s worth of PCI-X buses in each I/O subsystem. HP launched sx2000 in March 2006 to succeed sx1000. Its two FSBs operate at 533 MT/s. It supports up to 128 GB of memory at 17 GB/s. The memory is of HP's custom design, using the DDR2 protocol, but twice as tall as the standard modules and with redundant address and control signal contacts.
For the inter-chipset communication, 25.5 GB/s is available on each sx2000 through its three serial links, which can connect to a set of three independent crossbars, which in turn connect to other cells or to up to 3 other sets of 3 crossbars. The multi-cell configurations are the same as with sx1000, except that the parallelism of the sets of crossbars has been increased from 2 to 3. The maximum configuration of 64 sockets has 72 GB/s of sustainable bisection bandwidth. The chipset's connection to its I/O module is now serial, with an 8.5 GB/s peak and 5.5 GB/s sustained bandwidth, the I/O module having either 12 PCI-X buses at up to 266 MHz, or 6 PCI-X buses and 6 PCIe 1.1 ×8 slots. It is the last chipset to support HP's PA-RISC processors (PA-8900). HP launched the first zx2-based servers in September 2006. zx2 can operate the FSB at 667 MT/s with two CPUs or 533 MT/s with four CPUs. It connects to the DDR2 memory either directly, supporting 32 GB at up to 14.2 GB/s, or through expander boards, supporting up to 384 GB at 17 GB/s. The minimum open-page latency is 60 to 78 ns. 9.8 GB/s are available through eight independent links to the I/O adapters, which can include PCIe ×8 or 266 MHz PCI-X. In May 2003, IBM launched the XA-64 chipset for Itanium 2. It used many of the same technologies as the first two generations of XA-32 chipsets for Xeon, but by the time of the third-generation XA-32, IBM had decided to discontinue its Itanium products. The XA-64 supported 56 GB of DDR SDRAM in 28 slots at 6.4 GB/s, though due to bottlenecks only 3.2 GB/s could go to the CPU and another 2 GB/s to devices, for a 5.2 GB/s total. The CPU's memory bottleneck was mitigated by an off-chip 64 MB DRAM L4 cache, which also worked as a snoop filter in multi-chipset systems. The combined bandwidth of the four PCI-X buses and other I/O is bottlenecked to 2 GB/s per chipset. Two or four chipsets can be connected to make an 8- or 16-socket system.
SGI's Altix supercomputers and servers used the SHUB (Super-Hub) chipset, which supports two Itanium 2 sockets. The initial version used DDR memory through four buses for up to 12.8 GB/s bandwidth, and up to 32 GB of capacity across 16 slots. A 2.4 GB/s XIO channel connected to a module with up to six 64-bit 133 MHz PCI-X buses. SHUBs can be interconnected by the dual 6.4 GB/s NUMAlink4 link planes to create a 512-socket cache-coherent single-image system. A cache for the in-memory coherence directory saves memory bandwidth and reduces latency. The latency to the local memory is 132 ns, and each crossing of a NUMAlink4 router adds 50 ns. I/O modules with four 133 MHz PCI-X buses can connect directly to the NUMAlink4 network. SGI's second-generation SHUB 2.0 chipset supported up to 48 GB of DDR2 memory, 667 MT/s FSB, and could connect to I/O modules providing PCI Express. It supports only four local threads, so when having two dual-core CPUs per chipset, Hyper-Threading must be disabled. The Trillian Project was an effort by an industry consortium to port the Linux kernel to the Itanium processor. The project started in May 1999 with the goal of releasing the distribution in time for the initial release of Itanium, then scheduled for early 2000. By the end of 1999, the project included Caldera Systems, CERN, Cygnus Solutions, Hewlett-Packard, IBM, Intel, Red Hat, SGI, SuSE, TurboLinux and VA Linux Systems. The project released the resulting code in February 2000. The code then became part of the mainline Linux kernel more than a year before the release of the first Itanium processor. The Trillian project was able to do this for two reasons: After the successful completion of Project Trillian, the resulting Linux kernel was used by all of the manufacturers of Itanium systems (HP, IBM, Dell, SGI, Fujitsu, Unisys, Hitachi, and Groupe Bull.) With the notable exception of HP, Linux is either the primary OS or the only OS the manufacturer supports for Itanium. 
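The Altix latency figures quoted above (132 ns to local memory, 50 ns per NUMAlink4 router crossing) amount to a simple linear model; a minimal sketch, keeping in mind that real latency also depends on directory lookups and contention:

```python
def altix_latency_ns(router_hops):
    """Rough SGI Altix memory-latency model from the quoted figures:
    132 ns to memory behind the local SHUB, plus 50 ns for every
    NUMAlink4 router crossed on the way to a remote node."""
    LOCAL_NS = 132
    PER_HOP_NS = 50
    return LOCAL_NS + PER_HOP_NS * router_hops

print(altix_latency_ns(0))  # -> 132 (local memory)
print(altix_latency_ns(2))  # -> 232 (two router crossings away)
```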
Ongoing free and open source software support for Linux on Itanium subsequently coalesced at Gelato. In 2005, Fedora Linux started adding support for Itanium and Novell added support for SUSE Linux. In 2007, CentOS added support for Itanium in a new release. In 2009, Red Hat dropped Itanium support in Enterprise Linux 6. Ubuntu 10.10 dropped support for Itanium. In 2021, Linus Torvalds marked the Itanium code as orphaned. Torvalds said: "HPE no longer accepts orders for new Itanium hardware, and Intel stopped accepting orders a year ago. While intel is still officially shipping chips until July 29, 2021, it's unlikely that any such orders actually exist. It's dead, Jim." Support for Itanium was removed in Linux 6.7. In 2001, Compaq announced that OpenVMS would be ported to the Itanium architecture. This led to the creation of the V8.x releases of OpenVMS, which support both Itanium-based HPE Integrity Servers and DEC Alpha hardware. Since the Itanium porting effort began, ownership of OpenVMS transferred from Compaq to HP in 2001, and then to VMS Software Inc. (VSI) in 2014. Noteworthy releases include: Support for Itanium has been dropped in the V9.x releases of OpenVMS, which run on x86-64 only. NonStop OS was ported from MIPS-based hardware to Itanium in 2005, and later to x86-64 in 2015. Sales of Itanium-based NonStop hardware ended in 2020, with support ending in 2025. GNU Compiler Collection deprecated support for IA-64 in GCC 10, after Intel announced the planned phase-out of this ISA. LLVM (Clang) dropped Itanium support in version 2.6. HP sells a virtualization technology for Itanium called Integrity Virtual Machines. Emulation is a technique that allows a computer to execute binary code that was compiled for a different type of computer. Before IBM's acquisition of QuickTransit in 2009, application binary software for IRIX/MIPS and Solaris/SPARC could run via a type of emulation called "dynamic binary translation" on Linux/Itanium.
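Dynamic binary translation, as described above, converts guest machine code into host code at run time and caches the result, so hot code is decoded only once. The following toy sketch uses an invented three-instruction guest ISA purely for illustration (a real translator such as QuickTransit operates on actual machine code and handles register mapping, self-modifying code, and much more); it shows only the translate-once, execute-many structure:

```python
# Toy dynamic binary translator: guest blocks are translated to host
# (Python) callables once, cached, and reused on later runs. The
# three-instruction guest ISA below is invented for this sketch.
GUEST_PROGRAM = [
    ("li", "r1", 6),            # load immediate: r1 = 6
    ("li", "r2", 7),            # load immediate: r2 = 7
    ("mul", "r0", "r1", "r2"),  # r0 = r1 * r2
]

translation_cache = {}  # block id -> translated host function

def translate(block_id, instructions):
    """Decode a guest block and emit one host function for it (done once)."""
    ops = []
    for ins in instructions:
        if ins[0] == "li":
            _, dst, imm = ins
            ops.append(lambda regs, d=dst, i=imm: regs.__setitem__(d, i))
        elif ins[0] == "mul":
            _, dst, a, b = ins
            ops.append(lambda regs, d=dst, a=a, b=b:
                       regs.__setitem__(d, regs[a] * regs[b]))
        else:
            raise ValueError(f"unknown guest opcode: {ins[0]}")
    def host_block(regs):
        for op in ops:
            op(regs)
    translation_cache[block_id] = host_block
    return host_block

def run_block(block_id, instructions, regs):
    # Hot path: skip decoding entirely if this block was translated before.
    host = translation_cache.get(block_id)
    if host is None:
        host = translate(block_id, instructions)
    host(regs)

regs = {}
run_block(0, GUEST_PROGRAM, regs)  # first run: translate, then execute
run_block(0, GUEST_PROGRAM, regs)  # second run: served from the cache
print(regs["r0"])  # -> 42
```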
Similarly, HP implemented a method to execute PA-RISC/HP-UX on the Itanium/HP-UX via emulation, to simplify migration of its PA-RISC customers to the radically different Itanium instruction set. Itanium processors can also run the mainframe environment GCOS from Groupe Bull and several x86 operating systems via instruction set simulators. Itanium was aimed at the enterprise server and high-performance computing (HPC) markets. Other enterprise- and HPC-focused processor lines include Oracle's and Fujitsu's SPARC processors and IBM's Power microprocessors. Measured by quantity sold, Itanium's most serious competition came from x86-64 processors including Intel's own Xeon line and AMD's Opteron line. Since 2009, most servers were being shipped with x86-64 processors. In 2005, Itanium systems accounted for about 14% of HPC systems revenue, but the percentage declined as the industry shifted to x86-64 clusters for this application. An October 2008 Gartner report on the Tukwila processor stated that "...the future roadmap for Itanium looks as strong as that of any RISC peer like Power or SPARC." An Itanium-based computer first appeared on the list of the TOP500 supercomputers in November 2001. The best position ever achieved by an Itanium 2 based system in the list was No. 2, achieved in June 2004, when Thunder (Lawrence Livermore National Laboratory) entered the list with an Rmax of 19.94 Teraflops. In November 2004, Columbia entered the list at No. 2 with 51.8 Teraflops, and there was at least one Itanium-based computer in the top 10 from then until June 2007. The peak number of Itanium-based machines on the list occurred in the November 2004 list, at 84 systems (16.8%); by June 2012, this had dropped to one system (0.2%), and no Itanium system remained on the list in November 2012. The Itanium processors show a progression in capability. Merced was a proof of concept. 
McKinley dramatically improved the memory hierarchy and allowed Itanium to become reasonably competitive. Madison, with the shift to a 130 nm process, allowed for enough cache space to overcome the major performance bottlenecks. Montecito, with a 90 nm process, allowed for a dual-core implementation and a major improvement in performance per watt. Montvale added three new features: core-level lockstep, demand-based switching and a front-side bus speed of up to 667 MT/s. When first released in 2001, Itanium's performance was disappointing compared to better-established RISC and CISC processors. Emulation to run existing x86 applications and operating systems was particularly poor, with one benchmark in 2001 reporting that it was equivalent at best to a 100 MHz Pentium in this mode (1.1 GHz Pentiums were on the market at that time). Itanium failed to make significant inroads against IA-32 or RISC, and suffered further following the arrival of x86-64 systems which offered greater compatibility with older x86 applications. In a 2009 article on the history of the processor, "How the Itanium Killed the Computer Industry", journalist John C. Dvorak reported "This continues to be one of the great fiascos of the last 50 years". Tech columnist Ashlee Vance commented that the delays and underperformance "turned the product into a joke in the chip industry". In an interview, Donald Knuth said "The Itanium approach...was supposed to be so terrific—until it turned out that the wished-for compilers were basically impossible to write." Both Red Hat and Microsoft announced plans to drop Itanium support in their operating systems due to lack of market interest; however, other Linux distributions such as Gentoo and Debian remain available for Itanium. On March 22, 2011, Oracle Corporation announced that it would no longer develop new products for HP-UX on Itanium, although it would continue to provide support for existing products.
Following this announcement, HP sued Oracle for breach of contract, arguing that Oracle had violated conditions imposed during settlement over Oracle's hiring of former HP CEO Mark Hurd as its co-CEO, requiring the vendor to support Itanium on its software "until such time as HP discontinues the sales of its Itanium-based servers", and that the breach had harmed its business. In 2012, a court ruled in favor of HP, and ordered Oracle to resume its support for Itanium. In June 2016, Hewlett Packard Enterprise (the corporate successor to HP's server business) was awarded $3 billion in damages from the lawsuit. Oracle unsuccessfully appealed the decision to the California Court of Appeal in 2021. A former Intel official reported that the Itanium business had become profitable for Intel in late 2009. By 2009, the chip was almost entirely deployed on servers made by HP, which had over 95% of the Itanium server market share, making the main operating system for Itanium HP-UX. On March 22, 2011, Intel reaffirmed its commitment to Itanium with multiple generations of chips in development and on schedule. Although Itanium did attain limited success in the niche market of high-end computing, Intel had originally hoped it would find broader acceptance as a replacement for the original x86 architecture. AMD chose a different direction, designing the less radical x86-64, a 64-bit extension to the existing x86 architecture, which Microsoft then supported, forcing Intel to introduce the same extensions in its own x86-based processors. These designs can run existing 32-bit applications at native hardware speed, while offering support for 64-bit memory addressing and other enhancements to new applications. This architecture has now become the predominant 64-bit architecture in the desktop and portable market. Although some Itanium-based workstations were initially introduced by companies such as SGI, they are no longer available. 
[ { "paragraph_id": 0, "text": "Itanium (/aɪˈteɪniəm/ eye-TAY-nee-əm) is a discontinued family of 64-bit Intel microprocessors that implement the Intel Itanium architecture (formerly called IA-64). The Itanium architecture originated at Hewlett-Packard (HP), and was later jointly developed by HP and Intel. Launched in June 2001, Intel initially marketed the processors for enterprise servers and high-performance computing systems. In the concept phase, engineers said \"we could run circles around PowerPC, that we could kill the x86.\" Early predictions were that IA-64 would expand to the lower-end servers, supplanting Xeon, and eventually penetrate into the personal computers, eventually to supplant reduced instruction set computing (RISC) and complex instruction set computing (CISC) architectures for all general-purpose applications.", "title": "" }, { "paragraph_id": 1, "text": "When first released in 2001, Itanium's performance was disappointing compared to better-established RISC and CISC processors. Emulation to run existing x86 applications and operating systems was particularly poor. Itanium-based systems were produced by HP and its successor Hewlett Packard Enterprise (HPE) as the Integrity Servers line, and by several other manufacturers. In 2008, Itanium was the fourth-most deployed microprocessor architecture for enterprise-class systems, behind x86-64, Power ISA, and SPARC.", "title": "" }, { "paragraph_id": 2, "text": "In February 2017, Intel released the final generation, Kittson, to test customers, and in May began shipping in volume. It was used exclusively in mission-critical servers from HPE.", "title": "" }, { "paragraph_id": 3, "text": "In 2019, Intel announced that new orders for Itanium would be accepted until January 30, 2020, and shipments would cease by July 29, 2021. 
This took place on schedule.", "title": "" }, { "paragraph_id": 4, "text": "Itanium never sold well outside enterprise servers and high-performance computing systems, and the architecture was ultimately supplanted by competitor AMD’s x86-64 (also called AMD64) architecture. x86-64 is a compatible extension to the 32-bit x86 architecture, implemented by, for example, Intel's own Xeon line and AMD's Opteron line. Since 2009, most servers were being shipped with x86-64 processors, and they dominate the low cost desktop and laptop markets which were not initially targeted by Itanium. In an article titled \"Intel's Itanium is finally dead: The Itanic sunken by the x86 juggernaut\" Techspot declared \"Itanium's promise ended up sunken by a lack of legacy 32-bit support and difficulties in working with the architecture for writing and maintaining software\" while the dream of a single dominant ISA would be realized by the AMD64 extensions.", "title": "" }, { "paragraph_id": 5, "text": "In 1989, HP started to research an architecture that would exceed the expected limits of the reduced instruction set computer (RISC) architectures caused by the great increase in complexity needed for executing multiple instructions per cycle due to the need for dynamic dependency checking and precise exception handling. HP hired Bob Rau of Cydrome and Josh Fisher of Multiflow, the pioneers of very long instruction word (VLIW) computing. One VLIW instruction word can contain several independent instructions, which can be executed in parallel without having to evaluate them for independence. 
A compiler must attempt to find valid combinations of instructions that can be executed at the same time, effectively performing the instruction scheduling that conventional superscalar processors must do in hardware at runtime.", "title": "History" }, { "paragraph_id": 6, "text": "HP researchers modified the classic VLIW into a new type of architecture, later named Explicitly Parallel Instruction Computing (EPIC), which differs by: having template bits which show which instructions are independent inside and between the bundles of three instructions, which enables the explicitly parallel execution of multiple bundles and increasing the processors' issue width without the need to recompile; by predication of instructions to reduce the need for branches; and by full interlocking to eliminate the delay slots. In EPIC the assignment of execution units to instructions and the timing of their issuing can be decided by hardware, unlike in the classic VLIW. HP intended to use these features in PA-WideWord, the planned successor to their PA-RISC ISA. EPIC was intended to provide the best balance between the efficient use of silicon area and electricity, and general-purpose flexibility. In 1993 HP held an internal competition to design the best (simulated) microarchitectures of a RISC and an EPIC type, led by Jerry Huck and Rajiv Gupta respectively. The EPIC team won, with over double the simulated performance of the RISC competitor.", "title": "History" }, { "paragraph_id": 7, "text": "At the same time Intel was also looking for ways to make better ISAs. In 1989 Intel had launched the i860, which it marketed for workstations, servers, and iPSC and Paragon supercomputers. It differed from other RISCs by being able to switch between the normal single instruction per cycle mode, and a mode where pairs of instructions are explicitly defined as parallel so as to execute them in the same cycle without having to do dependency checking. 
Another distinguishing feature were the instructions for an exposed floating-point pipeline, that enabled the tripling of throughput compared to the conventional floating-point instructions. Both of these features were left largely unused because compilers didn't support them, a problem that later challenged Itanium too. Without them, i860's parallelism (and thus performance) was no better than other RISCs, so it failed in the market. Itanium would adopt a more flexible form of explicit parallelism than i860 had.", "title": "History" }, { "paragraph_id": 8, "text": "In November 1993 HP approached Intel, seeking collaboration on an innovative future architecture. At the time Intel was looking to extend x86 to 64 bits in a processor codenamed P7, which they found challenging. Later Intel claimed that four different design teams had explored 64-bit extensions, but each of them concluded that it was not economically feasible. At the meeting with HP, Intel's engineers were impressed when Jerry Huck and Rajiv Gupta presented the PA-WideWord architecture they had designed to replace PA-RISC. \"When we saw WideWord, we saw a lot of things we had only been looking at doing, already in their full glory\", said Intel's John Crawford, who in 1994 became the chief architect of Merced, and who had earlier argued against extending the x86 with P7. HP's Gupta recalled: \"I looked Albert Yu [Intel's general manager for microprocessors] in the eyes and showed him we could run circles around PowerPC, that we could kill PowerPC, that we could kill the x86.\" Soon Intel and HP started conducting in-depth technical discussions at a HP office, where each side had six engineers who exchanged and discussed both companies' confidential architectural research. They then decided to use not only PA-WideWord, but also the more experimental HP Labs PlayDoh as the source of their joint future architecture. 
Convinced of the superiority of the new project, in 1994 Intel canceled its existing plans for P7.

In June 1994 Intel and HP announced their joint effort to create a new ISA that would adopt ideas from Wide Word and VLIW. Yu declared: "If I were competitors, I'd be really worried. If you think you have a future, you don't." On P7's future, Intel said the alliance would affect it, but that "it is not clear" whether it would "fully encompass the new architecture". Later the same month, Intel said that some of the first features of the new architecture would start appearing on Intel chips as early as the P7, but that the full version would appear sometime later. In August 1994 EE Times reported that Intel had told investors that P7 was being re-evaluated and possibly canceled in favor of the HP processor. Intel immediately issued a clarification, saying that P7 was still being defined and that HP might contribute to its architecture. Later it was confirmed that the P7 codename had indeed passed to the HP-Intel processor. By early 1996 Intel revealed its new codename, Merced.

HP believed that it was no longer cost-effective for individual enterprise systems companies such as itself to develop proprietary microprocessors, so it partnered with Intel in 1994 to develop the IA-64 architecture, derived from EPIC. Intel was willing to undertake the very large development effort on IA-64 in the expectation that the resulting microprocessor would be used by the majority of enterprise systems manufacturers. HP and Intel initiated a large joint development effort with the goal of delivering the first product, Merced, in 1998.

Merced was designed by a team of 500, which Intel later admitted was too inexperienced, with many recent college graduates. Crawford (Intel) was the chief architect, while Huck (HP) held the second position.
Early in the development HP and Intel had a disagreement in which Intel wanted more dedicated hardware for more floating-point instructions. HP prevailed after the discovery of a floating-point hardware bug in Intel's Pentium. When Merced was floorplanned for the first time in mid-1996, it turned out to be far too large; "this was a lot worse than anything I'd seen before", said Crawford. The designers had to reduce the complexity (and thus performance) of subsystems, including the x86 unit, and cut the L2 cache to 96 KB. Eventually it was agreed that the size target could only be reached by using the 180 nm process instead of the intended 250 nm. Later, problems emerged with attempts to speed up the critical paths without disturbing the speed of the other circuits. Merced was taped out on 4 July 1999, and in August Intel produced the first complete test chip.

The expectations for Merced waned over time as delays and performance deficiencies emerged, shifting the focus and the onus for success onto the HP-led second Itanium design, codenamed McKinley. In July 1997 the switch to the 180 nm process delayed Merced into the second half of 1999. Shortly before the reveal of EPIC at the Microprocessor Forum in October 1997, an analyst at Microprocessor Report said that Itanium would "not show the competitive performance until 2001. It will take the second version of the chip for the performance to get shown". At the Forum, Intel's Fred Pollack originated the "wait for McKinley" mantra when he said that it would double Merced's performance and would "knock your socks off", while using the same 180 nm process as Merced. Pollack also said that Merced's x86 performance would be lower than that of the fastest x86 processors, and that x86 would "continue to grow at its historical rates".
Intel said that IA-64 would not have much presence in the consumer market for five to ten years.

Later it was reported that HP's motivation when starting to design McKinley in 1996 was to have more control over the project, so as to avoid the issues affecting Merced's performance and schedule. The design team finalized McKinley's project goals in 1997. In late May 1998 Merced was delayed to mid-2000, and by August 1998 analysts were questioning its commercial viability, given that McKinley would arrive shortly afterwards with double the performance; the delays were turning Merced into little more than a development vehicle for the Itanium ecosystem, and the "wait for McKinley" narrative was becoming prevalent. The same day it was reported that, because of the delays, HP would extend its line of PA-RISC PA-8000 series processors from the PA-8500 to as far as the PA-8900. In October 1998 HP announced its plans for four more generations of PA-RISC processors, with the PA-8900 set to reach 1.2 GHz in 2003.

By March 1999 some analysts expected Merced to ship in volume only in 2001, and the volume was widely expected to be low, as most customers would wait for McKinley. In May 1999, two months before Merced's tape-out, an analyst said that failure to tape out before July would result in another delay. In July 1999, upon reports that the first silicon would be made in late August, analysts predicted a delay to late 2000, and came to agree that Merced would be used chiefly for debugging and testing the IA-64 software. Linley Gwennap of MPR said of Merced that "at this point, everyone is expecting it's going to be late and slow, and the real advance is going to come from McKinley. What this does is puts a lot more pressure on McKinley and for that team to deliver". By then, Intel had revealed that Merced would initially be priced at $5000.
In August 1999 HP advised some of its customers to skip Merced and wait for McKinley. By July 2000 HP was telling the press that the first Itanium systems would be for niche uses ("You're not going to put this stuff near your data center for several years"); HP expected its Itanium systems to outsell its PA-RISC systems only in 2005. The same July, Intel disclosed another delay, due to a stepping change to fix bugs. Now only "pilot systems" would ship that year, while general availability was pushed to the "first half of 2001". Server makers had largely forgone R&D spending on Merced-based systems, instead using motherboards or whole servers of Intel's design. To foster a wide ecosystem, by mid-2000 Intel had provided 15,000 Itaniums in 5,000 systems to software developers and hardware designers. In March 2001 Intel said Itanium systems would begin shipping to customers in the second quarter, followed by a broader deployment in the second half of the year. By then even Intel publicly acknowledged that many customers would wait for McKinley.

During development, Intel, HP, and industry analysts predicted that IA-64 would dominate first in 64-bit servers and workstations, then expand to lower-end servers, supplanting Xeon, and finally penetrate personal computers, eventually supplanting RISC and complex instruction set computing (CISC) architectures for all general-purpose applications, though not replacing x86 "for the foreseeable future", according to Intel. In 1997–1998, Intel CEO Andy Grove predicted that Itanium would not come to desktop computers for four or five years after launch, saying "I don't see Merced appearing on a mainstream desktop inside of a decade". In contrast, Itanium was expected to capture 70% of the 64-bit server market in 2002.
Already in 1998 Itanium's focus on the high end of the computer market was criticized as making it vulnerable to challengers expanding from the lower-end market segments, but many people in the computer industry were reluctant to voice doubts about Itanium for fear of retaliation from Intel. Compaq and Silicon Graphics decided to abandon further development of the Alpha and MIPS architectures, respectively, in favor of migrating to IA-64.

Several groups ported operating systems to the architecture, including Microsoft Windows, OpenVMS, Linux, HP-UX, Solaris, Tru64 UNIX, and Monterey/64. The latter three were canceled before reaching the market. By 1997, it was apparent that the IA-64 architecture and the compiler were much more difficult to implement than originally thought, and the delivery timeframe of Merced began slipping.

Intel announced the official name of the processor, Itanium, on October 4, 1999. Within hours, the name Itanic had been coined on a Usenet newsgroup, a reference to the RMS Titanic, the "unsinkable" ocean liner that sank on her maiden voyage in 1912. "Itanic" was then used often by The Register, and others, to imply that the multibillion-dollar investment in Itanium, and the early hype associated with it, would be followed by its relatively quick demise.

After sampling 40,000 chips to its partners, Intel launched Itanium on May 29, 2001, with the first OEM systems from HP, IBM and Dell shipping to customers in June. By then Itanium's performance was not superior to that of competing RISC and CISC processors. Itanium competed at the low end (primarily four-CPU and smaller systems) with servers based on x86 processors, and at the high end with IBM POWER and Sun Microsystems SPARC processors.
Intel repositioned Itanium to focus on the high-end business and HPC markets, attempting to duplicate x86's successful "horizontal" market (i.e., single architecture, multiple systems vendors). The success of this initial processor version was limited to replacing PA-RISC in HP systems, Alpha in Compaq systems and MIPS in SGI systems, though IBM also delivered a supercomputer based on this processor. POWER and SPARC remained strong, while the 32-bit x86 architecture continued to grow into the enterprise space, building on the economies of scale fueled by its enormous installed base.

Only a few thousand systems using the original Merced Itanium processor were sold, due to relatively poor performance, high cost and limited software availability. Recognizing that the lack of software could be a serious problem for the future, Intel made thousands of these early systems available to independent software vendors (ISVs) to stimulate development. HP and Intel brought the next-generation Itanium 2 processor to market a year later. Few of Merced's microarchitectural features were carried over to all subsequent Itanium designs; among those that were are the 16+16 KB L1 caches and the 6-wide (two-bundle) instruction decoding.

The Itanium 2 processor was released in July 2002, and was marketed for enterprise servers rather than for the whole gamut of high-end computing. The first Itanium 2, code-named McKinley, was jointly developed by HP and Intel, led by the HP team at Fort Collins, Colorado, and taped out in December 2000. It relieved many of the performance problems of the original Itanium processor, which were mostly caused by an inefficient memory subsystem, by approximately halving the latency and doubling the fill bandwidth of each of the three levels of cache, while expanding the L2 cache from 96 to 256 KB.
Floating-point data is excluded from the L1 cache, because the L2 cache's higher bandwidth is more beneficial to typical floating-point applications than low latency. The L3 cache die was now integrated on-chip, tripling in associativity and doubling in bus width. McKinley also greatly increased the number of possible instruction combinations in a VLIW bundle and reached 25% higher frequency, despite having only eight pipeline stages versus Merced's ten.

McKinley contains 221 million transistors (of which 25 million are for logic and 181 million for the L3 cache), measured 19.5 mm by 21.6 mm (421 mm²) and was fabricated in a 180 nm, bulk CMOS process with six layers of aluminium metallization. In May 2003 it was disclosed that some McKinley processors could suffer from a critical-path erratum leading to system crashes. It could be avoided by lowering the processor frequency to 800 MHz.

In 2003, AMD released the Opteron CPU, which implements its own 64-bit architecture called AMD64. The Opteron gained rapid acceptance in the enterprise server space because it provided an easy upgrade from x86. Under the influence of Microsoft, Intel responded by implementing AMD's x86-64 instruction set architecture instead of IA-64 in its Xeon microprocessors in 2004, resulting in a new industry-wide de facto standard.

In 2003 Intel released a new Itanium 2 family member, codenamed Madison, initially with frequencies up to 1.5 GHz and 6 MB of L3 cache. The Madison 9M chip released in November 2004 had 9 MB of L3 cache and frequencies up to 1.6 GHz, reaching 1.67 GHz in July 2005.
Both chips used a 130 nm process and were the basis of all new Itanium processors until Montecito was released in July 2006: Deerfield was a low-wattage Madison, and Fanwood was a version of Madison 9M for lower-end servers with one or two CPU sockets.

In November 2005, the major Itanium server manufacturers joined with Intel and a number of software vendors to form the Itanium Solutions Alliance, to promote the architecture and accelerate the software porting effort. The Alliance announced that its members would invest $10 billion in the alliance by the end of the decade.

In early 2003, due to the success of IBM's dual-core POWER4, Intel announced that the first 90 nm Itanium processor, codenamed Montecito, would be delayed to 2005 so that it could be changed into a dual-core design, merging it with the Chivano project. In September 2004 Intel demonstrated a working Montecito system, and claimed that the inclusion of hyper-threading increased Montecito's performance by 10–20% and that its frequency could reach 2 GHz. After a delay to "mid-2006" and a reduction of the frequency to 1.6 GHz, on July 18 Intel delivered Montecito (marketed as the Itanium 2 9000 series), a dual-core processor with switch-on-event multithreading and split 256 KB + 1 MB L2 caches that roughly doubled the performance and decreased the energy consumption by about 20 percent. At a 596 mm² die size and 1.72 billion transistors, it was the largest microprocessor of its time. It was supposed to feature Foxton Technology, a very sophisticated frequency regulator, which failed to pass validation and was thus not enabled for customers.

Intel released the Itanium 9100 series, codenamed Montvale, in November 2007, retiring the "Itanium 2" brand.
Originally intended to use the 65 nm process, it was changed into a fix of Montecito, enabling demand-based switching (like EIST) and a front-side bus of up to 667 MT/s, both of which had been intended for Montecito, plus core-level lockstep. Montecito and Montvale were the last Itanium processors in whose design Hewlett-Packard's engineering team at Fort Collins had a key role, as the team was subsequently transferred to Intel's ownership.

The original code name for the first Itanium with more than two cores was Tanglewood, but it was changed to Tukwila in late 2003 due to trademark issues. Intel discussed a "middle-of-the-decade Itanium" to succeed Montecito, achieving ten times the performance of Madison. It was being designed by the famed DEC Alpha team and was expected to have eight new multithreading-focused cores. Intel claimed "a lot more than two" cores and more than seven times the performance of Madison. In early 2004 Intel told of "plans to achieve up to double the performance over the Intel Xeon processor family at platform cost parity by 2007". By early 2005 Tukwila had been redefined, now having fewer cores but focusing on single-threaded performance and multiprocessor scalability.

In March 2005, Intel disclosed some details of Tukwila, the next Itanium processor after Montvale, to be released in 2007. Tukwila would have four processor cores and would replace the Itanium bus with a new Common System Interface, which would also be used by a new Xeon processor. Tukwila was to have a "common platform architecture" with a Xeon codenamed Whitefield, which was canceled in October 2005, when Intel revised Tukwila's delivery date to late 2008. In May 2009, the schedule for Tukwila was revised again, with the release to OEMs planned for the first quarter of 2010.
The Itanium 9300 series processor, codenamed Tukwila, was released on February 8, 2010, with greater performance and memory capacity.

The device uses a 65 nm process and includes two to four cores, up to 24 MB of on-die cache, Hyper-Threading technology and integrated memory controllers. It implements double-device data correction, which helps to fix memory errors. Tukwila also implements Intel QuickPath Interconnect (QPI) to replace the Itanium bus-based architecture. It has a peak interprocessor bandwidth of 96 GB/s and a peak memory bandwidth of 34 GB/s. With QuickPath, the processor has integrated memory controllers and interfaces the memory directly, using QPI interfaces to connect directly to other processors and I/O hubs. QuickPath is also used on Intel x86-64 processors using the Nehalem microarchitecture, which possibly enabled Tukwila and Nehalem to use the same chipsets. Tukwila incorporates two memory controllers, each of which has two links to Scalable Memory Buffers, which in turn support multiple DDR3 DIMMs, much like the Nehalem-based Xeon processor code-named Beckton.

During the 2012 Hewlett-Packard Co. v. Oracle Corp. support lawsuit, court documents unsealed by a Santa Clara County Court judge revealed that in 2008, Hewlett-Packard had paid Intel around $440 million to keep producing and updating Itanium microprocessors from 2009 to 2014. In 2010, the two companies signed another $250 million deal, which obliged Intel to continue making Itanium CPUs for HP's machines until 2017. Under the terms of the agreements, HP had to pay for the chips it got from Intel, while Intel was to launch the Tukwila, Poulson, Kittson, and Kittson+ chips in a bid to gradually boost the performance of the platform.

Intel first mentioned Poulson on March 1, 2005, at the Spring IDF.
In June 2007 Intel said that Poulson would use a 32 nm process technology, skipping the 45 nm process. This was necessary to catch up after Itanium's delays had left it at 90 nm, competing against 65 nm and 45 nm processors.

At ISSCC 2011, Intel presented a paper called "A 32nm 3.1 Billion Transistor 12-Wide-Issue Itanium Processor for Mission Critical Servers". Analyst David Kanter speculated that Poulson would use a new microarchitecture, with a more advanced form of multithreading that uses up to two threads, to improve performance for single-threaded and multithreaded workloads. Some information was also released at the Hot Chips conference.

The information presented covered improvements in multithreading, resiliency improvements (Intel Instruction Replay RAS) and a few new instructions (thread priority, an integer instruction, cache prefetching, and data access hints).

Poulson was released on November 8, 2012, as the Itanium 9500 series processor. It is the follow-on processor to Tukwila. It features eight cores and has a 12-wide issue architecture, multithreading enhancements, and new instructions to take advantage of parallelism, especially in virtualization. The L3 cache is 32 MB, shared between all the cores rather than divided among them as before. The total L2 cache is 6 MB, with 512 KB for instructions and 256 KB for data per core. The die size is 544 mm², less than that of its predecessor Tukwila (698.75 mm²).

Intel's Product Change Notification (PCN) 111456-01 listed four models of the Itanium 9500 series CPU; the list was later removed in a revised document. The parts were later listed in Intel's Material Declaration Data Sheets (MDDS) database.
Intel later posted the Itanium 9500 reference manual.

The models are the following:

Intel had committed to at least one more generation after Poulson, first mentioning Kittson on 14 June 2007. Kittson was supposed to be made on a 22 nm process and to use the same LGA2011 socket and platform as Xeons. On 31 January 2013 Intel issued an update to its plans for Kittson: it would have the same LGA1248 socket and 32 nm process as Poulson, thus effectively halting any further development of Itanium processors.

In April 2015, Intel, although it had not yet confirmed formal specifications, did confirm that it continued to work on the project. Meanwhile, the aggressively multicore Xeon E7 platform displaced Itanium-based solutions in the Intel roadmap. Even Hewlett-Packard, the main proponent of and customer for Itanium, began selling x86-based Superdome and NonStop servers, and started to treat the Itanium-based versions as legacy products.

Intel officially launched the Itanium 9700 series processor family on May 11, 2017. Kittson has no microarchitecture improvements over Poulson; despite nominally having a different stepping, it is functionally identical to the 9500 series, even having exactly the same bugs, the only difference being the 133 MHz higher frequency of the 9760 and 9750 over the 9560 and 9550 respectively.

Intel announced that the 9700 series would be the last Itanium chips produced.

The models are:

Market share

In comparison with its Xeon family of server processors, Itanium was never a high-volume product for Intel.
Intel does not release production numbers, but one industry analyst estimated a production rate of 200,000 processors per year in 2007.

According to Gartner Inc., the total number of Itanium servers (not processors) sold by all vendors in 2007 was about 55,000 (it is unclear whether clustered servers counted as a single server or not). This compares with 417,000 RISC servers (spread across all RISC vendors) and 8.4 million x86 servers. IDC reports that a total of 184,000 Itanium-based systems were sold from 2001 through 2007. For the combined POWER/SPARC/Itanium systems market, IDC reports that POWER captured 42% of revenue and SPARC captured 32%, while Itanium-based system revenue reached 26% in the second quarter of 2008. According to an IDC analyst, in 2007 HP accounted for perhaps 80% of Itanium systems revenue. According to Gartner, in 2008 HP accounted for 95% of Itanium sales. HP's Itanium system sales ran at an annual rate of $4.4 billion at the end of 2008, and declined to $3.5 billion by the end of 2009, compared to a 35% decline in UNIX system revenue for Sun and an 11% drop for IBM, against an x86-64 server revenue increase of 14% over the same period.

In December 2012, IDC released a research report stating that Itanium server shipments would remain flat through 2016, with annual shipments of 26,000 systems (a decline of over 50% compared to shipments in 2008).

Hardware support

By 2006, HP manufactured at least 80% of all Itanium systems, and sold 7,200 in the first quarter of 2006. The bulk of the systems sold were enterprise servers and machines for large-scale technical computing, with an average selling price per system in excess of US$200,000.
A typical system uses eight or more Itanium processors.

By 2012, only a few manufacturers offered Itanium systems, including HP, Bull, NEC, Inspur and Huawei. In addition, Intel offered a chassis that could be used by system integrators to build Itanium systems.

By 2015, only HP supplied Itanium-based systems. With HP's split in late 2015, Itanium systems (branded as Integrity) are handled by Hewlett Packard Enterprise (HPE), which made a major update in 2017 (Integrity i6 and HP-UX 11i v3 Update 16). HPE also supports a few other operating systems, including Windows up to Server 2008 R2, Linux, OpenVMS and NonStop. Itanium is not affected by Spectre and Meltdown.

Prior to the 9300-series (Tukwila), chipsets were needed to connect the processor to the main memory and I/O devices, as the front-side bus to the chipset was the sole connection out of the processor (except for TAP (JTAG) and SMBus for debugging and system configuration). Two generations of buses existed: the original Itanium processor system bus (the Merced bus) had a 64-bit data width and a 133 MHz clock with DDR signaling (266 MT/s); it was soon superseded by the 128-bit, 200 MHz DDR (400 MT/s) Itanium 2 processor system bus (the McKinley bus), which later reached 533 and 667 MT/s. Up to four CPUs could share a single bus, but prior to the 9000-series, bus speeds over 400 MT/s were limited to two processors per bus. As no Itanium chipset could connect to more than four sockets, high-end servers needed multiple interconnected chipsets.

The "Tukwila" Itanium processor model had been designed to share a common chipset with the Intel Xeon processor EX (Intel's Xeon processor designed for four-processor and larger servers).
The goal was to streamline system development and reduce costs for server OEMs, many of which develop both Itanium- and Xeon-based servers. However, in 2013, this goal was pushed back to be "evaluated for future implementation opportunities".

In the era before on-chip memory controllers and QPI, enterprise server manufacturers differentiated their systems by designing and developing chipsets that interface the processor to memory, interconnections, and peripheral controllers. "Enterprise server" referred to the then-lucrative market segment of high-end servers with high reliability, availability and serviceability and typically 16 or more processor sockets; such systems justified their pricing with a custom system-level architecture, with their own chipsets at its heart, offering capabilities far beyond what two-socket "commodity servers" could. Developing a chipset cost tens of millions of dollars, and so represented a major commitment to the use of Itanium.

Neither Intel nor IBM would develop Itanium 2 chipsets to support newer technologies such as DDR2 or PCI Express. Before "Tukwila" moved away from the FSB, chipsets supporting such technologies were manufactured by all Itanium server vendors, such as HP, Fujitsu, SGI, NEC, and Hitachi.

The first generation of Itanium received no vendor-specific chipsets, only Intel's 460GX, consisting of ten distinct chips. It supported up to four CPUs and 64 GB of memory at 4.2 GB/s, which is twice the system bus's bandwidth. Addresses and data were handled by two different chips.
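The bandwidth figures quoted for these buses follow from simple width-times-rate arithmetic. The following quick check uses only parameters stated in the text; the function name is illustrative:

```python
# Peak bus bandwidth = data-path width in bytes x transfer rate.
# Bus parameters below are those quoted in the text for the two
# generations of the Itanium front-side bus.
def bandwidth_gbs(width_bits: int, mts: int) -> float:
    """Peak bandwidth in GB/s (1 GB/s = 10**9 bytes/s)."""
    return width_bits / 8 * mts * 1e6 / 1e9

merced_bus = bandwidth_gbs(64, 266)     # original Itanium system bus
mckinley_bus = bandwidth_gbs(128, 400)  # Itanium 2 system bus

print(f"Merced bus:   {merced_bus:.1f} GB/s")    # ~2.1 GB/s
print(f"McKinley bus: {mckinley_bus:.1f} GB/s")  # 6.4 GB/s
```

This is consistent with the 460GX figures above: its 4.2 GB/s memory bandwidth is indeed about twice the ~2.1 GB/s of the Merced bus.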
The 460GX had an AGP 4X graphics bus, two 64-bit 66 MHz PCI buses, and a configurable 33 MHz PCI interface usable as dual 32-bit buses or a single 64-bit bus.

There were many custom chipset designs for Itanium 2, but many smaller vendors chose to use Intel's E8870 chipset. It supports 128 GB of DDR SDRAM at 6.4 GB/s. It was originally designed for Rambus RDRAM serial memory, but when RDRAM failed to succeed, Intel added four DDR SDRAM-to-RDRAM converter chips to the chipset. When Intel had previously made such a converter for the Pentium III 820 and 840 chipsets, it had drastically cut performance. The E8870 provides eight 133 MHz PCI-X buses (4.2 GB/s total because of bottlenecks) and an ICH4 hub with six USB 2.0 ports. Two E8870s can be linked together by two E8870SP Scalability Port Switches, each containing a 1 MB (~200,000 cache lines) snoop filter, to create an 8-socket system with double the memory and PCI-X capacity, but still only one ICH4. Further expansion to 16 sockets was planned. In 2004 Intel revealed plans for its next Itanium chipset, codenamed Bayshore, to support PCIe and DDR2 memory, but canceled it the same year.

HP has designed four different chipsets for Itanium 2: zx1, sx1000, zx2 and sx2000. All support four sockets per chipset, but sx1000 and sx2000 support the interconnection of up to 16 chipsets to create up to a 64-socket system. Developed alongside Itanium 2 itself, and booting the first Itanium 2 in February 2001, zx1 became the first Itanium 2 chipset available, and later, in 2004, also the first to support the 533 MT/s FSB. In its basic two-chip version it directly provides four channels of DDR-266 memory, giving 8.5 GB/s of bandwidth and 32 GB of capacity through 12 DIMM slots.
In versions with memory expander boards, memory bandwidth reaches 12.8 GB/s; the maximum capacity was 96 GB for the initial two-board, 48-DIMM expanders, and up to 128 GB for the later single-board, 32-DIMM expander. The expanders increase memory latency by 25 nanoseconds, from 80 ns. Eight independent links went to the PCI-X and other peripheral devices (e.g. AGP in workstations), totaling 4 GB/s.

HP's first high-end Itanium chipset was sx1000, launched in mid-2003 with the Integrity Superdome flagship server. It has two independent front-side buses, each supporting two sockets, giving 12.8 GB/s of combined bandwidth from the processors to the chipset. It has four links to data-only memory buffers and supports 64 GB of HP-designed 125 MHz memory at 16 GB/s. Together these components form a system board called a cell. Two cells can be directly connected to create an 8-socket glueless system. To connect four cells together, a pair of 8-ported crossbar switches is needed (adding 64 ns to inter-cell memory accesses), while four such pairs of crossbar switches are needed for the top-end system of 16 cells (64 sockets), giving 32 GB/s of bisection bandwidth. Cells maintain cache coherence through in-memory directories, which raises the minimum memory latency to 241 ns. The latency to the most remote (NUMA) memory is 463 ns. The per-cell bandwidth to the I/O subsystems is 2 GB/s, despite the presence of 8 GB/s worth of PCI-X buses in each I/O subsystem.

HP launched sx2000 in March 2006 to succeed sx1000. Its two FSBs operate at 533 MT/s. It supports up to 128 GB of memory at 17 GB/s. The memory is of HP's custom design, using the DDR2 protocol, but twice as tall as standard modules and with redundant address and control signal contacts.
For inter-chipset communication, 25.5 GB/s is available on each sx2000 through its three serial links that can connect to a set of three independent crossbars, which connect to other cells or up to 3 other sets of 3 crossbars. The multi-cell configurations are the same as with sx1000, except the parallelism of the sets of crossbars has been increased from 2 to 3. The maximum configuration of 64 sockets has 72 GB/s of sustainable bisection bandwidth. The chipset's connection to its I/O module is now serial with an 8.5 GB/s peak and 5.5 GB/s sustained bandwidth, the I/O module having either 12 PCI-X buses at up to 266 MHz, or 6 PCI-X buses and 6 PCIe 1.1 ×8 slots. It is the last chipset to support HP's PA-RISC processors (PA-8900).", "title": "Hardware support" }, { "paragraph_id": 57, "text": "HP launched the first zx2-based servers in September 2006. zx2 can operate the FSB at 667 MT/s with two CPUs or 533 MT/s with four CPUs. It connects to the DDR2 memory either directly, supporting 32 GB at up to 14.2 GB/s, or through expander boards, supporting up to 384 GB at 17 GB/s. The minimum open-page latency is 60 to 78 ns. 9.8 GB/s is available through eight independent links to the I/O adapters, which can include PCIe ×8 or 266 MHz PCI-X.", "title": "Hardware support" }, { "paragraph_id": 58, "text": "In May 2003, IBM launched the XA-64 chipset for Itanium 2. It used many of the same technologies as the first two generations of XA-32 chipsets for Xeon, but by the time of the third-generation XA-32, IBM had decided to discontinue its Itanium products. XA-64 supported 56 GB of DDR SDRAM in 28 slots at 6.4 GB/s, though due to bottlenecks only 3.2 GB/s could go to the CPU and another 2 GB/s to devices, for a 5.2 GB/s total. The CPU's memory bottleneck was mitigated by an off-chip 64 MB DRAM L4 cache, which also worked as a snoop filter in multi-chipset systems. The combined bandwidth of the four PCI-X buses and other I/O is bottlenecked to 2 GB/s per chipset.
Two or four chipsets can be connected to make an 8- or 16-socket system.", "title": "Hardware support" }, { "paragraph_id": 59, "text": "SGI's Altix supercomputers and servers used the SHUB (Super-Hub) chipset, which supports two Itanium 2 sockets. The initial version used DDR memory through four buses for up to 12.8 GB/s bandwidth, and up to 32 GB of capacity across 16 slots. A 2.4 GB/s XIO channel connected to a module with up to six 64-bit 133 MHz PCI-X buses. SHUBs can be interconnected by the dual 6.4 GB/s NUMAlink4 link planes to create a 512-socket cache-coherent single-image system. A cache for the in-memory coherence directory saves memory bandwidth and reduces latency. The latency to the local memory is 132 ns, and each crossing of a NUMAlink4 router adds 50 ns. I/O modules with four 133 MHz PCI-X buses can connect directly to the NUMAlink4 network. SGI's second-generation SHUB 2.0 chipset supported up to 48 GB of DDR2 memory, 667 MT/s FSB, and could connect to I/O modules providing PCI Express. It supports only four local threads, so with two dual-core CPUs per chipset, Hyper-Threading must be disabled.", "title": "Hardware support" }, { "paragraph_id": 60, "text": "The Trillian Project was an effort by an industry consortium to port the Linux kernel to the Itanium processor. The project started in May 1999 with the goal of releasing the distribution in time for the initial release of Itanium, then scheduled for early 2000. By the end of 1999, the project included Caldera Systems, CERN, Cygnus Solutions, Hewlett-Packard, IBM, Intel, Red Hat, SGI, SuSE, TurboLinux and VA Linux Systems. The project released the resulting code in February 2000. The code then became part of the mainline Linux kernel more than a year before the release of the first Itanium processor.
The Trillian project was able to do this for two reasons:", "title": "Software support" }, { "paragraph_id": 61, "text": "After the successful completion of Project Trillian, the resulting Linux kernel was used by all of the manufacturers of Itanium systems (HP, IBM, Dell, SGI, Fujitsu, Unisys, Hitachi, and Groupe Bull). With the notable exception of HP, Linux is either the primary OS or the only OS the manufacturer supports for Itanium. Ongoing free and open source software support for Linux on Itanium subsequently coalesced at Gelato.", "title": "Software support" }, { "paragraph_id": 62, "text": "In 2005, Fedora Linux started adding support for Itanium and Novell added support for SUSE Linux. In 2007, CentOS added support for Itanium in a new release.", "title": "Software support" }, { "paragraph_id": 63, "text": "In 2009, Red Hat dropped Itanium support in Enterprise Linux 6. Ubuntu 10.10 dropped support for Itanium. In 2021, Linus Torvalds marked the Itanium code as orphaned. Torvalds said:", "title": "Software support" }, { "paragraph_id": 64, "text": "\"HPE no longer accepts orders for new Itanium hardware, and Intel stopped accepting orders a year ago. While intel is still officially shipping chips until July 29, 2021, it's unlikely that any such orders actually exist. It's dead, Jim.\"", "title": "Software support" }, { "paragraph_id": 65, "text": "Support for Itanium was removed in Linux 6.7.", "title": "Software support" }, { "paragraph_id": 66, "text": "In 2001, Compaq announced that OpenVMS would be ported to the Itanium architecture. This led to the creation of the V8.x releases of OpenVMS, which support both Itanium-based HPE Integrity Servers and DEC Alpha hardware. Since the Itanium porting effort began, ownership of OpenVMS transferred from Compaq to HP in 2001, and then to VMS Software Inc. (VSI) in 2014.
Noteworthy releases include:", "title": "Software support" }, { "paragraph_id": 67, "text": "Support for Itanium has been dropped in the V9.x releases of OpenVMS, which run on x86-64 only.", "title": "Software support" }, { "paragraph_id": 68, "text": "NonStop OS was ported from MIPS-based hardware to Itanium in 2005. NonStop OS was later ported to x86-64 in 2015. Sales of Itanium-based NonStop hardware ended in 2020, with support ending in 2025.", "title": "Software support" }, { "paragraph_id": 69, "text": "GNU Compiler Collection deprecated support for IA-64 in GCC 10, after Intel announced the planned phase-out of this ISA. LLVM (Clang) dropped Itanium support in version 2.6.", "title": "Software support" }, { "paragraph_id": 70, "text": "HP sells a virtualization technology for Itanium called Integrity Virtual Machines.", "title": "Software support" }, { "paragraph_id": 71, "text": "Emulation is a technique that allows a computer to execute binary code that was compiled for a different type of computer. Before IBM's acquisition of QuickTransit in 2009, application binary software for IRIX/MIPS and Solaris/SPARC could run via a type of emulation called \"dynamic binary translation\" on Linux/Itanium. Similarly, HP implemented a method to execute PA-RISC/HP-UX on Itanium/HP-UX via emulation, to simplify migration of its PA-RISC customers to the radically different Itanium instruction set. Itanium processors can also run the mainframe environment GCOS from Groupe Bull and several x86 operating systems via instruction set simulators.", "title": "Software support" }, { "paragraph_id": 72, "text": "Itanium was aimed at the enterprise server and high-performance computing (HPC) markets. Other enterprise- and HPC-focused processor lines include Oracle's and Fujitsu's SPARC processors and IBM's Power microprocessors. Measured by quantity sold, Itanium's most serious competition came from x86-64 processors including Intel's own Xeon line and AMD's Opteron line.
Since 2009, most servers have shipped with x86-64 processors.", "title": "Competition" }, { "paragraph_id": 73, "text": "In 2005, Itanium systems accounted for about 14% of HPC systems revenue, but the percentage declined as the industry shifted to x86-64 clusters for this application.", "title": "Competition" }, { "paragraph_id": 74, "text": "An October 2008 Gartner report on the Tukwila processor stated that \"...the future roadmap for Itanium looks as strong as that of any RISC peer like Power or SPARC.\"", "title": "Competition" }, { "paragraph_id": 75, "text": "An Itanium-based computer first appeared on the list of the TOP500 supercomputers in November 2001. The best position ever achieved by an Itanium 2 based system in the list was No. 2, achieved in June 2004, when Thunder (Lawrence Livermore National Laboratory) entered the list with an Rmax of 19.94 Teraflops. In November 2004, Columbia entered the list at No. 2 with 51.8 Teraflops, and there was at least one Itanium-based computer in the top 10 from then until June 2007. The peak number of Itanium-based machines on the list occurred in the November 2004 list, at 84 systems (16.8%); by June 2012, this had dropped to one system (0.2%), and no Itanium system remained on the list in November 2012.", "title": "Supercomputers and high-performance computing" }, { "paragraph_id": 76, "text": "The Itanium processors show a progression in capability. Merced was a proof of concept. McKinley dramatically improved the memory hierarchy and allowed Itanium to become reasonably competitive. Madison, with the shift to a 130 nm process, allowed for enough cache space to overcome the major performance bottlenecks. Montecito, with a 90 nm process, allowed for a dual-core implementation and a major improvement in performance per watt.
Montvale added three new features: core-level lockstep, demand-based switching, and a front-side bus frequency of up to 667 MHz.", "title": "Processors" }, { "paragraph_id": 77, "text": "When first released in 2001, Itanium's performance was disappointing compared to better-established RISC and CISC processors. Emulation to run existing x86 applications and operating systems was particularly poor, with one benchmark in 2001 reporting that it was equivalent at best to a 100 MHz Pentium in this mode (1.1 GHz Pentiums were on the market at that time). Itanium failed to make significant inroads against IA-32 or RISC, and suffered further following the arrival of x86-64 systems which offered greater compatibility with older x86 applications.", "title": "Market reception" }, { "paragraph_id": 78, "text": "In a 2009 article on the history of the processor — \"How the Itanium Killed the Computer Industry\" — journalist John C. Dvorak reported \"This continues to be one of the great fiascos of the last 50 years\". Tech columnist Ashlee Vance commented that the delays and underperformance \"turned the product into a joke in the chip industry\". In an interview, Donald Knuth said \"The Itanium approach...was supposed to be so terrific—until it turned out that the wished-for compilers were basically impossible to write.\"", "title": "Market reception" }, { "paragraph_id": 79, "text": "Both Red Hat and Microsoft announced plans to drop Itanium support in their operating systems due to lack of market interest; however, other Linux distributions such as Gentoo and Debian remain available for Itanium. On March 22, 2011, Oracle Corporation announced that it would no longer develop new products for HP-UX on Itanium, although it would continue to provide support for existing products.
Following this announcement, HP sued Oracle for breach of contract, arguing that Oracle had violated conditions imposed during the settlement of a dispute over Oracle's hiring of former HP CEO Mark Hurd, which required the vendor to support Itanium on its software \"until such time as HP discontinues the sales of its Itanium-based servers\", and that the breach had harmed its business. In 2012, a court ruled in favor of HP and ordered Oracle to resume its support for Itanium. In June 2016, Hewlett Packard Enterprise (the corporate successor to HP's server business) was awarded $3 billion in damages from the lawsuit. Oracle unsuccessfully appealed the decision to the California Court of Appeal in 2021.", "title": "Market reception" }, { "paragraph_id": 80, "text": "A former Intel official reported that the Itanium business had become profitable for Intel in late 2009. By 2009, the chip was almost entirely deployed on servers made by HP, which had over 95% of the Itanium server market share, making HP-UX the main operating system for Itanium. On March 22, 2011, Intel reaffirmed its commitment to Itanium with multiple generations of chips in development and on schedule.", "title": "Market reception" }, { "paragraph_id": 81, "text": "Although Itanium did attain limited success in the niche market of high-end computing, Intel had originally hoped it would find broader acceptance as a replacement for the original x86 architecture.", "title": "Market reception" }, { "paragraph_id": 82, "text": "AMD chose a different direction, designing the less radical x86-64, a 64-bit extension to the existing x86 architecture, which Microsoft then supported, forcing Intel to introduce the same extensions in its own x86-based processors. These designs can run existing 32-bit applications at native hardware speed, while offering support for 64-bit memory addressing and other enhancements to new applications.
This architecture has now become the predominant 64-bit architecture in the desktop and portable market. Although some Itanium-based workstations were initially introduced by companies such as SGI, they are no longer available.", "title": "Market reception" }, { "paragraph_id": 83, "text": "1989", "title": "Timeline" }, { "paragraph_id": 84, "text": "1994", "title": "Timeline" }, { "paragraph_id": 85, "text": "1995", "title": "Timeline" }, { "paragraph_id": 86, "text": "1996", "title": "Timeline" }, { "paragraph_id": 87, "text": "1997", "title": "Timeline" }, { "paragraph_id": 88, "text": "1998", "title": "Timeline" }, { "paragraph_id": 89, "text": "1999", "title": "Timeline" }, { "paragraph_id": 90, "text": "2000", "title": "Timeline" }, { "paragraph_id": 91, "text": "2001", "title": "Timeline" }, { "paragraph_id": 92, "text": "2002", "title": "Timeline" }, { "paragraph_id": 93, "text": "2003", "title": "Timeline" }, { "paragraph_id": 94, "text": "2004", "title": "Timeline" }, { "paragraph_id": 95, "text": "2005", "title": "Timeline" }, { "paragraph_id": 96, "text": "2006", "title": "Timeline" }, { "paragraph_id": 97, "text": "2007", "title": "Timeline" }, { "paragraph_id": 98, "text": "2009", "title": "Timeline" }, { "paragraph_id": 99, "text": "2010", "title": "Timeline" }, { "paragraph_id": 100, "text": "2011", "title": "Timeline" }, { "paragraph_id": 101, "text": "2012", "title": "Timeline" }, { "paragraph_id": 102, "text": "2013", "title": "Timeline" }, { "paragraph_id": 103, "text": "2014", "title": "Timeline" }, { "paragraph_id": 104, "text": "2017", "title": "Timeline" }, { "paragraph_id": 105, "text": "2019", "title": "Timeline" }, { "paragraph_id": 106, "text": "2020", "title": "Timeline" }, { "paragraph_id": 107, "text": "2021", "title": "Timeline" }, { "paragraph_id": 108, "text": "", "title": "External links" } ]
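The peak-bandwidth figures quoted in the chipset paragraphs above all follow from the same arithmetic: transfers per second times bytes per transfer. A minimal sketch of that calculation, assuming the standard bus widths of the era (128-bit Itanium 2 front-side bus, 64-bit DDR memory channels), which are not stated explicitly in the article:

```python
# Peak bus bandwidth = transfers/s × bytes per transfer.
# Bus widths below are assumptions based on the standard implementations.

def peak_gb_per_s(mega_transfers: float, width_bits: int) -> float:
    """Peak bandwidth in GB/s for a bus doing mega_transfers MT/s at width_bits wide."""
    return mega_transfers * 1e6 * (width_bits // 8) / 1e9

# Itanium 2 front-side bus: 400 MT/s × 16 bytes = 6.4 GB/s (the figure cited for E8870)
print(round(peak_gb_per_s(400, 128), 1))      # 6.4
# Four DDR-266 channels on HP zx1: 4 × 266 MT/s × 8 bytes ≈ 8.5 GB/s
print(round(4 * peak_gb_per_s(266, 64), 1))   # 8.5
```

The same formula reproduces the sx1000's 12.8 GB/s combined figure as two such front-side buses.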
Itanium is a discontinued family of 64-bit Intel microprocessors that implement the Intel Itanium architecture. The Itanium architecture originated at Hewlett-Packard (HP), and was later jointly developed by HP and Intel. Launched in June 2001, the processors were initially marketed by Intel for enterprise servers and high-performance computing systems. In the concept phase, engineers said "we could run circles around PowerPC, that we could kill the x86." Early predictions were that IA-64 would expand to lower-end servers, supplanting Xeon, and eventually penetrate personal computers, supplanting reduced instruction set computing (RISC) and complex instruction set computing (CISC) architectures for all general-purpose applications. When first released in 2001, Itanium's performance was disappointing compared to better-established RISC and CISC processors. Emulation to run existing x86 applications and operating systems was particularly poor. Itanium-based systems were produced by HP and its successor Hewlett Packard Enterprise (HPE) as the Integrity Servers line, and by several other manufacturers. In 2008, Itanium was the fourth-most deployed microprocessor architecture for enterprise-class systems, behind x86-64, Power ISA, and SPARC. In February 2017, Intel released the final generation, Kittson, to test customers, and in May began shipping it in volume. It was used exclusively in mission-critical servers from HPE. In 2019, Intel announced that new orders for Itanium would be accepted until January 30, 2020, and shipments would cease by July 29, 2021. This took place on schedule. Itanium never sold well outside enterprise servers and high-performance computing systems, and the architecture was ultimately supplanted by competitor AMD’s x86-64 architecture. x86-64 is a compatible extension to the 32-bit x86 architecture, implemented by, for example, Intel's own Xeon line and AMD's Opteron line.
Since 2009, most servers have shipped with x86-64 processors, which also dominate the low-cost desktop and laptop markets that were not initially targeted by Itanium. In an article titled "Intel's Itanium is finally dead: The Itanic sunken by the x86 juggernaut", Techspot declared that "Itanium's promise ended up sunken by a lack of legacy 32-bit support and difficulties in working with the architecture for writing and maintaining software", while the dream of a single dominant ISA would instead be realized by the AMD64 extensions.
2001-12-29T08:00:09Z
2023-12-29T13:30:29Z
[ "Template:Intel processors", "Template:0", "Template:Cite CiteSeerX", "Template:Good article", "Template:Respell", "Template:Update inline", "Template:Main", "Template:Cite book", "Template:IPAc-en", "Template:Notelist", "Template:Cite journal", "Template:Further", "Template:Zwsp", "Template:Nowrap", "Template:Cite news", "Template:Authority control", "Template:Short description", "Template:Efn", "Template:Refn", "Template:Reflist", "Template:Cite magazine", "Template:Cite conference", "Template:Cite press release", "Template:Commons category", "Template:Webarchive", "Template:CPU technologies", "Template:Infobox CPU", "Template:Dunno", "Template:Cite web", "Template:Cite newsgroup" ]
https://en.wikipedia.org/wiki/Itanium
15,459
International Classification of Diseases
The International Classification of Diseases (ICD) is a globally used medical classification for epidemiology, health management and clinical purposes. The ICD is maintained by the World Health Organization (WHO), which is the directing and coordinating authority for health within the United Nations System. The ICD was originally designed as a health care classification system, providing a system of diagnostic codes for classifying diseases, including nuanced classifications of a wide variety of signs, symptoms, abnormal findings, complaints, social circumstances, and external causes of injury or disease. This system is designed to map health conditions to corresponding generic categories together with specific variations, assigning each a designated code, up to six characters long. Thus, major categories are designed to include a set of similar diseases. The ICD is published by the WHO and used worldwide for morbidity and mortality statistics, reimbursement systems, and automated decision support in health care. This system is designed to promote international comparability in the collection, processing, classification, and presentation of these statistics. The ICD is a major project to statistically classify all health disorders and provide diagnostic assistance. The ICD is the core statistically based diagnostic classification system for health-care-related issues within the WHO Family of International Classifications (WHO-FIC). The ICD is revised periodically and is currently in its 11th revision. The ICD-11, as it is therefore known, was accepted by WHO's World Health Assembly (WHA) on 25 May 2019 and officially came into effect on 1 January 2022. On 11 February 2022, the WHO stated that 35 countries were using the ICD-11.
The ICD is part of a "family" of international classifications (WHO-FIC) that complement each other, also including the International Classification of Functioning, Disability and Health (ICF), which focuses on the domains of functioning (disability) associated with health conditions, from both medical and social perspectives, and the International Classification of Health Interventions (ICHI), which classifies the whole range of medical, nursing, functioning and public health interventions. The title of the ICD is formally the International Statistical Classification of Diseases and Related Health Problems, although the original title, International Classification of Diseases, is still informally the name by which it is usually known. In the United States and some other countries, the Diagnostic and Statistical Manual of Mental Disorders (DSM) is preferred for the classification of mental disorders for some purposes. In 1860, during the International Statistical Congress held in London, Florence Nightingale made a proposal that was to result in the development of the first model of systematic collection of hospital data. In 1893, a French physician, Jacques Bertillon, introduced the Bertillon Classification of Causes of Death at a congress of the International Statistical Institute in Chicago. A number of countries adopted Bertillon's system, which was based on the principle of distinguishing between general diseases and those localized to a particular organ or anatomical site, as used by the City of Paris for classifying deaths. Subsequent revisions represented a synthesis of English, German, and Swiss classifications, expanding from the original 44 titles to 161 titles. In 1898, the American Public Health Association (APHA) recommended that the registrars of Canada, Mexico, and the United States also adopt it. The APHA also recommended revising the system every 10 years to ensure the system remained current with medical practice advances.
As a result, the first international conference to revise the International Classification of Causes of Death took place in 1900, with revisions occurring every ten years thereafter. At that time, the classification system was contained in one book, which included an Alphabetic Index as well as a Tabular List. The book was small compared with current coding texts. The revisions that followed contained minor changes, until the sixth revision of the classification system. With the sixth revision, the classification system expanded to two volumes. The sixth revision included morbidity and mortality conditions, and its title was modified to reflect the changes: International Statistical Classification of Diseases, Injuries and Causes of Death (ICD). Prior to the sixth revision, responsibility for ICD revisions fell to the Mixed Commission, a group composed of representatives from the International Statistical Institute and the Health Organization of the League of Nations. In 1948, the WHO assumed responsibility for preparing and publishing the revisions to the ICD every ten years. WHO sponsored the seventh and eighth revisions in 1957 and 1968, respectively. It later became clear that the established ten-year interval between revisions was too short. The ICD is currently the most widely used statistical classification system for diseases in the world. In addition, some countries—including Australia, Canada, and the United States—have developed their own adaptations of ICD, with more procedure codes for classification of operative or diagnostic procedures. The ICD-6, published in 1949, was the first designed to be suitable for morbidity reporting. Accordingly, the name changed from International List of Causes of Death to International Statistical Classification of Diseases. The combined code section for injuries and their associated accidents was split into two, a chapter for injuries, and a chapter for their external causes.
With use for morbidity there was a need for coding mental conditions, and for the first time a section on mental disorders was added. The International Conference for the Seventh Revision of the International Classification of Diseases was held in Paris under the auspices of WHO in February 1955. In accordance with a recommendation of the WHO Expert Committee on Health Statistics, this revision was limited to essential changes and amendments of errors and inconsistencies. The 8th Revision Conference convened by WHO met in Geneva, from 6 to 12 July 1965. This revision was more radical than the Seventh but left unchanged the basic structure of the Classification and the general philosophy of classifying diseases, whenever possible, according to their etiology rather than a particular manifestation. During the years that the Seventh and Eighth Revisions of the ICD were in force, the use of the ICD for indexing hospital medical records increased rapidly and some countries prepared national adaptations which provided the additional detail needed for this application of the ICD. In the US, a group of consultants was asked to study the ICD-8 for its applicability to various users in the United States. This group recommended that further detail be provided for coding hospital and morbidity data. The American Hospital Association's "Advisory Committee to the Central Office on ICDA" developed the needed adaptation proposals, resulting in the publication of the International Classification of Diseases, Adapted (ICDA). In 1968, the United States Public Health Service published the International Classification of Diseases, Adapted, 8th Revision for use in the United States (ICDA-8). Beginning in 1968, ICDA-8 served as the basis for coding diagnostic data for both official morbidity and mortality statistics in the United States.
The International Conference for the Ninth Revision of the International Statistical Classification of Diseases, Injuries, and Causes of Death, convened by WHO, met in Geneva from 30 September to 6 October 1975. In the discussions leading up to the conference, it had originally been intended that there should be little change other than updating of the classification. This was mainly because of the expense of adapting data processing systems each time the classification was revised. There had been an enormous growth of interest in the ICD and ways had to be found of responding to this, partly by modifying the classification itself and partly by introducing special coding provisions. A number of representations were made by specialist bodies which had become interested in using the ICD for their own statistics. Some subject areas in the classification were regarded as inappropriately arranged and there was considerable pressure for more detail and for adaptation of the classification to make it more relevant for the evaluation of medical care, by classifying conditions to the chapters concerned with the part of the body affected rather than to those dealing with the underlying generalized disease. At the other end of the scale, there were representations from countries and areas where a detailed and sophisticated classification was irrelevant, but which nevertheless needed a classification based on the ICD in order to assess their progress in health care and in the control of disease. A field test with a bi-axial classification approach—one axis (criterion) for anatomy and another for etiology—showed the impracticability of such an approach for routine use. The final proposals presented to and accepted by the Conference in 1978 retained the basic structure of the ICD, although with much additional detail at the level of the four-digit subcategories, and some optional five-digit subdivisions.
For the benefit of users not requiring such detail, care was taken to ensure that the categories at the three-digit level were appropriate. As the World Health Organization explains: "For the benefit of users wishing to produce statistics and indexes oriented towards medical care, the 9th Revision included an optional alternative method of classifying diagnostic statements, including information about both an underlying general disease and a manifestation in a particular organ or site. This system became known as the 'dagger and asterisk system' and is retained in the Tenth Revision. A number of other technical innovations were included in the Ninth Revision, aimed at increasing its flexibility for use in a variety of situations." It was eventually replaced by ICD-10, the version currently in use by the WHO and most countries. Given the widespread expansion in the tenth revision, it is not possible to convert ICD-9 data sets directly into ICD-10 data sets, although some tools are available to help guide users. Publication of ICD-9 without IP restrictions in a world with evolving electronic data systems led to a range of products based on ICD-9, such as MedDRA or the Read directory. When ICD-9 was published by the World Health Organization (WHO), the International Classification of Procedures in Medicine (ICPM) was also developed (1975) and published (1978). The ICPM surgical procedures fascicle was originally created by the United States, based on its adaptations of ICD (called ICDA), which had contained a procedure classification since 1962. ICPM is published separately from the ICD disease classification as a series of supplementary documents called fascicles (bundles or groups of items). Each fascicle contains a classification of modes of laboratory, radiology, surgery, therapy, and other diagnostic procedures. Many countries have adapted and translated the ICPM in parts or as a whole and have been using it, with amendments, since then.
The International Classification of Diseases, Clinical Modification (ICD-9-CM) was an adaptation created by the US National Center for Health Statistics (NCHS) and used in assigning diagnostic and procedure codes associated with inpatient, outpatient, and physician office utilization in the United States. The ICD-9-CM is based on the ICD-9 but provides for additional morbidity detail. It was updated annually on October 1. It consists of three volumes: The NCHS and the Centers for Medicare and Medicaid Services are the US governmental agencies responsible for overseeing all changes and modifications to the ICD-9-CM. Work on ICD-10 began in 1983, and the new revision was endorsed by the Forty-third World Health Assembly in May 1990. It came into use in WHO Member States starting in 1994. The classification system allows more than 55,000 different codes and permits tracking of many new diagnoses and procedures, a significant expansion on the 17,000 codes available in ICD-9. Adoption was relatively swift in most of the world. Several materials are made available online by WHO to facilitate its use, including a manual, training guidelines, a browser, and files for download. Some countries have adapted the international standard, such as the "ICD-10-AM" published in Australia in 1998 (also used in New Zealand), and the "ICD-10-CA" introduced in Canada in 2000. Adoption of ICD-10-CM was slow in the United States. Since 1979, the US had required ICD-9-CM codes for Medicare and Medicaid claims, and most of the rest of the American medical industry followed suit. On 1 January 1999, the ICD-10 (without clinical extensions) was adopted for reporting mortality, but ICD-9-CM was still used for morbidity.
Meanwhile, NCHS received permission from the WHO to create a clinical modification of the ICD-10, and has overseen production of all of these systems: On 21 August 2008, the US Department of Health and Human Services (HHS) proposed new code sets to be used for reporting diagnoses and procedures on health care transactions. Under the proposal, the ICD-9-CM code sets would be replaced with the ICD-10-CM code sets, effective 1 October 2013. On 17 April 2012, HHS published a proposed rule that would delay, from 1 October 2013 to 1 October 2014, the compliance date for the ICD-10-CM and PCS. Once again, Congress delayed the implementation date, to 1 October 2015, after the delay was inserted into the "Doc Fix" bill without debate, over the objections of many. Revisions to ICD-10-CM include: ICD-10-CA is a clinical modification of ICD-10 developed by the Canadian Institute for Health Information for morbidity classification in Canada. ICD-10-CA applies beyond acute hospital care, and includes conditions and situations that are not diseases but represent risk factors to health, such as occupational and environmental factors, lifestyle and psycho-social circumstances. The eleventh revision of the International Classification of Diseases, or the ICD-11, is almost five times as large as the ICD-10. It was created following a decade of development involving over 300 specialists from 55 countries. Following an alpha version in May 2011 and a beta draft in May 2012, a stable version of the ICD-11 was released on 18 June 2018, and officially endorsed by all WHO members during the 72nd World Health Assembly on 25 May 2019. For the ICD-11, the WHO decided to differentiate between the core of the system and its derived specialty versions, such as the ICD-O for oncology. As such, the collection of all ICD entities is called the Foundation Component. From this common core, subsets can be derived.
The primary derivative of the Foundation is called the ICD-11 MMS, and it is this system that is commonly referred to and recognized as "the ICD-11". MMS stands for Mortality and Morbidity Statistics. ICD-11 comes with an implementation package that includes transition tables from and to ICD-10, a translation tool, a coding tool, web-services, a manual, training material, and more. All tools are accessible after self-registration from the Maintenance Platform. The ICD-11 officially came into effect on 1 January 2022, although the WHO admitted that "not many countries are likely to adapt that quickly". In the United States, the advisory body of the Secretary of Health and Human Services has given an expected release year of 2025, but if a clinical modification is determined to be needed (similar to the ICD-10-CM), this could become 2027. In the United States, the US Public Health Service published The International Classification of Diseases, Adapted for Indexing of Hospital Records and Operation Classification (ICDA), completed in 1962 and expanding the ICD-7 in a number of areas to more completely meet the indexing needs of hospitals. The US Public Health Service later published the Eighth Revision, International Classification of Diseases, Adapted for Use in the United States, commonly referred to as ICDA-8, for official national morbidity and mortality statistics. This was followed by the ICD, 9th Revision, Clinical Modification, known as ICD-9-CM, published by the US Department of Health and Human Services and used by hospitals and other healthcare facilities to better describe the clinical picture of the patient. The diagnosis component of ICD-9-CM is completely consistent with ICD-9 codes, and remains the data standard for reporting morbidity. National adaptations of the ICD-10 progressed to incorporate both clinical code (ICD-10-CM) and procedure code (ICD-10-PCS) with the revisions completed in 2003. 
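For simple uses, the transition tables in the ICD-11 implementation package reduce to a code-to-code lookup. A minimal sketch, assuming a hypothetical two-column CSV layout; the real WHO tables carry more structure (the kind of relation, multiple candidate targets), and the sample codes here are illustrative only:

```python
# Minimal sketch of using an ICD-10 -> ICD-11 transition table as a lookup.
# The column layout is hypothetical; real WHO transition tables are richer.
import csv
import io

# Stand-in for a downloaded transition-table file (codes illustrative only).
sample = io.StringIO("icd10_code,icd11_code\nA00,1A00\nJ45,CA23\n")

mapping = {}
for row in csv.DictReader(sample):
    # A single ICD-10 code may map to several ICD-11 candidates, so keep a list.
    mapping.setdefault(row["icd10_code"], []).append(row["icd11_code"])

print(mapping.get("J45"))   # ['CA23']
print(mapping.get("X99"))   # None -- not every code has a direct counterpart
```

In practice a coder would still review each candidate target, since many ICD-9/ICD-10/ICD-11 correspondences are not one-to-one.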
In 2009, the US Centers for Medicare and Medicaid Services announced that it would begin using ICD-10 on April 1, 2010, with full compliance by all involved parties by 2013. However, the US extended the deadline twice and did not formally require transitioning to ICD-10-CM (for most clinical encounters) until October 1, 2015. The years for which causes of death in the United States have been classified by each revision are as follows: Causes of death on United States death certificates, statistically compiled by the Centers for Disease Control and Prevention (CDC), are coded in the ICD, which does not include codes for human and system factors commonly called medical errors. The various ICD editions include sections that classify mental and behavioural disorders. The ICD-10 Classification of Mental and Behavioural Disorders: Clinical Descriptions and Diagnostic Guidelines – also known as the "blue book" – is derived from Chapter V of ICD-10 and gives the diagnostic criteria for the conditions listed at each category therein. The blue book was developed separately from, but coexists with, the Diagnostic and Statistical Manual of Mental Disorders (DSM) of the American Psychiatric Association—though both seek to use the same diagnostic classifications. A survey of psychiatrists in 66 countries comparing use of the ICD-10 and DSM-IV found that the former was more often used for clinical diagnosis while the latter was more valued for research. As part of the development of the ICD-11, WHO established an "International Advisory Group" to guide what would become the chapter on "Mental, behavioural or neurodevelopmental disorders". The working group proposed that ICD-11 should declassify the categories within ICD-10 at "F66 Psychological and behavioural disorders that are associated with sexual development and orientation".
The group reported to WHO that there was "no evidence" these classifications were clinically useful, as they do not "contribute to health service delivery or treatment selection nor provide essential information for public health surveillance." The group added that, despite ICD-10 explicitly stating that "sexual orientation by itself is not to be considered a disorder", the inclusion of such categories "suggest[s] that mental disorders exist that are uniquely linked to sexual orientation and gender expression", a position already recognised by the DSM as well as by other classification systems. The ICD is actually the official system for the US, although many mental health professionals do not realize this due to the dominance of the DSM. A psychologist has stated: "Serious problems with the clinical utility of both the ICD and the DSM are widely acknowledged." Note: Since the adoption of ICD-10-CM in the US, several online tools have proliferated. They all refer to that particular modification and thus are not linked here.
[ { "paragraph_id": 0, "text": "The International Classification of Diseases (ICD) is a globally used medical classification used in epidemiology, health management and for clinical purposes. The ICD is maintained by the World Health Organization (WHO), which is the directing and coordinating authority for health within the United Nations System. The ICD was originally designed as a health care classification system, providing a system of diagnostic codes for classifying diseases, including nuanced classifications of a wide variety of signs, symptoms, abnormal findings, complaints, social circumstances, and external causes of injury or disease. This system is designed to map health conditions to corresponding generic categories together with specific variations, assigning for these a designated code, up to six characters long. Thus, major categories are designed to include a set of similar diseases.", "title": "" }, { "paragraph_id": 1, "text": "The ICD is published by the WHO and used worldwide for morbidity and mortality statistics, reimbursement systems, and automated decision support in health care. This system is designed to promote international comparability in the collection, processing, classification, and presentation of these statistics. The ICD is a major project to statistically classify all health disorders, and provide diagnostic assistance. The ICD is a core statistically based classificatory diagnostic system for health care related issues of the WHO Family of International Classifications (WHO-FIC).", "title": "" }, { "paragraph_id": 2, "text": "The ICD is revised periodically and is currently in its 11th revision. The ICD-11, as it is therefore known, was accepted by WHO's World Health Assembly (WHA) on 25 May 2019 and officially came into effect on 1 January 2022. 
On 11 February 2022, the WHO stated that 35 countries were using the ICD-11.", "title": "" }, { "paragraph_id": 3, "text": "The ICD is part of a \"family\" of international classifications (WHO-FIC) that complement each other, also including the International Classification of Functioning, Disability and Health (ICF) which focuses on the domains of functioning (disability) associated with health conditions, from both medical and social perspectives, and the International Classification of Health Interventions (ICHI) that classifies the whole range of medical, nursing, functioning and public health interventions.", "title": "" }, { "paragraph_id": 4, "text": "The title of the ICD is formally the International Statistical Classification of Diseases and Related Health Problems, although the original title, International Classification of Diseases, is still informally the name by which it is usually known.", "title": "" }, { "paragraph_id": 5, "text": "In the United States and some other countries, the Diagnostic and Statistical Manual of Mental Disorders (DSM) is preferred for the classification of mental disorders for some purposes.", "title": "" }, { "paragraph_id": 6, "text": "In 1860, during the international statistical congress held in London, Florence Nightingale made a proposal that was to result in the development of the first model of systematic collection of hospital data. In 1893, a French physician, Jacques Bertillon, introduced the Bertillon Classification of Causes of Death at a congress of the International Statistical Institute in Chicago.", "title": "Historical synopsis" }, { "paragraph_id": 7, "text": "A number of countries adopted Bertillon's system, which was based on the principle of distinguishing between general diseases and those localized to a particular organ or anatomical site, as used by the City of Paris for classifying deaths. 
Subsequent revisions represented a synthesis of English, German, and Swiss classifications, expanding from the original 44 titles to 161 titles. In 1898, the American Public Health Association (APHA) recommended that the registrars of Canada, Mexico, and the United States also adopt it. The APHA also recommended revising the system every 10 years to ensure the system remained current with medical practice advances. As a result, the first international conference to revise the International Classification of Causes of Death took place in 1900, with revisions occurring every ten years thereafter. At that time, the classification system was contained in one book, which included an Alphabetic Index as well as a Tabular List. The book was small compared with current coding texts.", "title": "Historical synopsis" }, { "paragraph_id": 8, "text": "The revisions that followed contained minor changes, until the sixth revision of the classification system. With the sixth revision, the classification system expanded to two volumes. The sixth revision included morbidity and mortality conditions, and its title was modified to reflect the changes: International Statistical Classification of Diseases, Injuries and Causes of Death (ICD). Prior to the sixth revision, responsibility for ICD revisions fell to the Mixed Commission, a group composed of representatives from the International Statistical Institute and the Health Organization of the League of Nations. In 1948, the WHO assumed responsibility for preparing and publishing the revisions to the ICD every ten years. WHO sponsored the seventh and eighth revisions in 1957 and 1968, respectively. It later became clear that the established ten year interval between revisions was too short.", "title": "Historical synopsis" }, { "paragraph_id": 9, "text": "The ICD is currently the most widely used statistical classification system for diseases in the world. 
In addition, some countries—including Australia, Canada, and the United States—have developed their own adaptations of ICD, with more procedure codes for classification of operative or diagnostic procedures.", "title": "Historical synopsis" }, { "paragraph_id": 10, "text": "The ICD-6, published in 1949, was the first to be shaped to become suitable for morbidity reporting. Accordingly, the name changed from International List of Causes of Death to International Statistical Classification of Diseases. The combined code section for injuries and their associated accidents was split into two, a chapter for injuries, and a chapter for their external causes. With use for morbidity there was a need for coding mental conditions, and for the first time a section on mental disorders was added.", "title": "Versions of ICD" }, { "paragraph_id": 11, "text": "The International Conference for the Seventh Revision of the International Classification of Diseases was held in Paris under the auspices of WHO in February 1955. In accordance with a recommendation of the WHO Expert Committee on Health Statistics, this revision was limited to essential changes and amendments of errors and inconsistencies.", "title": "Versions of ICD" }, { "paragraph_id": 12, "text": "The 8th Revision Conference convened by WHO met in Geneva, from 6 to 12 July 1965. This revision was more radical than the Seventh but left unchanged the basic structure of the Classification and the general philosophy of classifying diseases, whenever possible, according to their etiology rather than a particular manifestation. 
During the years that the Seventh and Eighth Revisions of the ICD were in force, the use of the ICD for indexing hospital medical records increased rapidly and some countries prepared national adaptations which provided the additional detail needed for this application of the ICD.", "title": "Versions of ICD" }, { "paragraph_id": 13, "text": "In the US, a group of consultants was asked to study the ICD-8 for its applicability to various users in the United States. This group recommended that further detail be provided for coding hospital and morbidity data. The American Hospital Association's \"Advisory Committee to the Central Office on ICDA\" developed the needed adaptation proposals, resulting in the publication of the International Classification of Diseases, Adapted (ICDA). In 1968, the United States Public Health Service published the International Classification of Diseases, Adapted, 8th Revision for use in the United States (ICDA-8). Beginning in 1968, ICDA-8 served as the basis for coding diagnostic data for both official morbidity and mortality statistics in the United States.", "title": "Versions of ICD" }, { "paragraph_id": 14, "text": "The International Conference for the Ninth Revision of the International Statistical Classification of Diseases, Injuries, and Causes of Death, convened by WHO, met in Geneva from 30 September to 6 October 1975. In the discussions leading up to the conference, it had originally been intended that there should be little change other than updating of the classification. This was mainly because of the expense of adapting data processing systems each time the classification was revised.", "title": "Versions of ICD" }, { "paragraph_id": 15, "text": "There had been an enormous growth of interest in the ICD and ways had to be found of responding to this, partly by modifying the classification itself and partly by introducing special coding provisions. 
A number of representations were made by specialist bodies which had become interested in using the ICD for their own statistics. Some subject areas in the classification were regarded as inappropriately arranged and there was considerable pressure for more detail and for adaptation of the classification to make it more relevant for the evaluation of medical care, by classifying conditions to the chapters concerned with the part of the body affected rather than to those dealing with the underlying generalized disease.", "title": "Versions of ICD" }, { "paragraph_id": 16, "text": "At the other end of the scale, there were representations from countries and areas where a detailed and sophisticated classification was irrelevant, but which nevertheless needed a classification based on the ICD in order to assess their progress in health care and in the control of disease. A field test with a bi-axial classification approach—one axis (criterion) for anatomy, with another for etiology—showed the impracticability of such an approach for routine use.", "title": "Versions of ICD" }, { "paragraph_id": 17, "text": "The final proposals presented to and accepted by the Conference in 1978 retained the basic structure of the ICD, although with much additional detail at the level of the four digit subcategories, and some optional five digit subdivisions. For the benefit of users not requiring such detail, care was taken to ensure that the categories at the three digit level were appropriate.", "title": "Versions of ICD" }, { "paragraph_id": 18, "text": "As the World Health Organization explains: \"For the benefit of users wishing to produce statistics and indexes oriented towards medical care, the 9th Revision included an optional alternative method of classifying diagnostic statements, including information about both an underlying general disease and a manifestation in a particular organ or site. 
This system became known as the 'dagger and asterisk system' and is retained in the Tenth Revision. A number of other technical innovations were included in the Ninth Revision, aimed at increasing its flexibility for use in a variety of situations.\"", "title": "Versions of ICD" }, { "paragraph_id": 19, "text": "It was eventually replaced by ICD-10, the version currently in use by the WHO and most countries. Given the widespread expansion in the tenth revision, it is not possible to convert ICD-9 data sets directly into ICD-10 data sets, although some tools are available to help guide users. Publication of ICD-9 without IP restrictions in a world with evolving electronic data systems led to a range of products based on ICD-9, such as MedDRA or the Read directory.", "title": "Versions of ICD" }, { "paragraph_id": 20, "text": "When ICD-9 was published by the World Health Organization (WHO), the International Classification of Procedures in Medicine (ICPM) was also developed (1975) and published (1978). The ICPM surgical procedures fascicle was originally created by the United States, based on its adaptations of ICD (called ICDA), which had contained a procedure classification since 1962. ICPM is published separately from the ICD disease classification as a series of supplementary documents called fascicles (bundles or groups of items). Each fascicle contains a classification of modes of laboratory, radiology, surgery, therapy, and other diagnostic procedures. Many countries have adapted and translated the ICPM in parts or as a whole and have been using it with amendments since then.", "title": "Versions of ICD" }, { "paragraph_id": 21, "text": "The International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) was an adaptation created by the US National Center for Health Statistics (NCHS) and used in assigning diagnostic and procedure codes associated with inpatient, outpatient, and physician office utilization in the United States. 
The ICD-9-CM is based on the ICD-9 but provides for additional morbidity detail. It was updated annually on October 1.", "title": "Versions of ICD" }, { "paragraph_id": 22, "text": "It consists of three volumes:", "title": "Versions of ICD" }, { "paragraph_id": 23, "text": "The NCHS and the Centers for Medicare and Medicaid Services are the US governmental agencies responsible for overseeing all changes and modifications to the ICD-9-CM.", "title": "Versions of ICD" }, { "paragraph_id": 24, "text": "Work on ICD-10 began in 1983, and the new revision was endorsed by the Forty-third World Health Assembly in May 1990. The latest version came into use in WHO Member States starting in 1994. The classification system allows more than 55,000 different codes and permits tracking of many new diagnoses and procedures, a significant expansion on the 17,000 codes available in ICD-9. Adoption was relatively swift in most of the world. Several materials are made available online by WHO to facilitate its use, including a manual, training guidelines, a browser, and files for download. Some countries have adapted the international standard, such as the \"ICD-10-AM\" published in Australia in 1998 (also used in New Zealand), and the \"ICD-10-CA\" introduced in Canada in 2000.", "title": "Versions of ICD" }, { "paragraph_id": 25, "text": "Adoption of ICD-10-CM was slow in the United States. Since 1979, the US had required ICD-9-CM codes for Medicare and Medicaid claims, and most of the rest of the American medical industry followed suit. On 1 January 1999 the ICD-10 (without clinical extensions) was adopted for reporting mortality, but ICD-9-CM was still used for morbidity. 
Meanwhile, NCHS received permission from the WHO to create a clinical modification of the ICD-10, and has since produced all of these systems:", "title": "Versions of ICD" }, { "paragraph_id": 26, "text": "On 21 August 2008, the US Department of Health and Human Services (HHS) proposed new code sets to be used for reporting diagnoses and procedures on health care transactions. Under the proposal, the ICD-9-CM code sets would be replaced with the ICD-10-CM code sets, effective 1 October 2013. On 17 April 2012 HHS published a proposed rule that would delay, from 1 October 2013 to 1 October 2014, the compliance date for ICD-10-CM and ICD-10-PCS. Once again the implementation date was delayed, this time by Congress to 1 October 2015, after the provision was inserted into the \"Doc Fix\" bill without debate, over the objections of many.", "title": "Versions of ICD" }, { "paragraph_id": 27, "text": "Revisions to ICD-10-CM include:", "title": "Versions of ICD" }, { "paragraph_id": 28, "text": "ICD-10-CA is a clinical modification of ICD-10 developed by the Canadian Institute for Health Information for morbidity classification in Canada. ICD-10-CA applies beyond acute hospital care, and includes conditions and situations that are not diseases but represent risk factors to health, such as occupational and environmental factors, lifestyle and psycho-social circumstances.", "title": "Versions of ICD" }, { "paragraph_id": 29, "text": "The eleventh revision of the International Classification of Diseases, or the ICD-11, is almost five times as big as the ICD-10. It was created following a decade of development involving over 300 specialists from 55 countries. 
Following an alpha version in May 2011 and a beta draft in May 2012, a stable version of the ICD-11 was released on 18 June 2018, and officially endorsed by all WHO members during the 72nd World Health Assembly on 25 May 2019.", "title": "Versions of ICD" }, { "paragraph_id": 30, "text": "For the ICD-11, the WHO decided to differentiate between the core of the system and its derived specialty versions, such as the ICD-O for oncology. As such, the collection of all ICD entities is called the Foundation Component. From this common core, subsets can be derived. The primary derivative of the Foundation is called the ICD-11 MMS, and it is this system that is commonly referred to and recognized as \"the ICD-11\". MMS stands for Mortality and Morbidity Statistics.", "title": "Versions of ICD" }, { "paragraph_id": 31, "text": "ICD-11 comes with an implementation package that includes transition tables from and to ICD-10, a translation tool, a coding tool, web-services, a manual, training material, and more. All tools are accessible after self-registration from the Maintenance Platform.", "title": "Versions of ICD" }, { "paragraph_id": 32, "text": "The ICD-11 officially came into effect on 1 January 2022, although the WHO admitted that \"not many countries are likely to adapt that quickly\". In the United States, the advisory body of the Secretary of Health and Human Services has given an expected release year of 2025, but if a clinical modification is determined to be needed (similar to the ICD-10-CM), this could become 2027.", "title": "Versions of ICD" }, { "paragraph_id": 33, "text": "In the United States, the US Public Health Service published The International Classification of Diseases, Adapted for Indexing of Hospital Records and Operation Classification (ICDA), completed in 1962 and expanding the ICD-7 in a number of areas to more completely meet the indexing needs of hospitals. 
The US Public Health Service later published the Eighth Revision, International Classification of Diseases, Adapted for Use in the United States, commonly referred to as ICDA-8, for official national morbidity and mortality statistics. This was followed by the ICD, 9th Revision, Clinical Modification, known as ICD-9-CM, published by the US Department of Health and Human Services and used by hospitals and other healthcare facilities to better describe the clinical picture of the patient. The diagnosis component of ICD-9-CM is completely consistent with ICD-9 codes, and remains the data standard for reporting morbidity. National adaptations of the ICD-10 progressed to incorporate both clinical code (ICD-10-CM) and procedure code (ICD-10-PCS) with the revisions completed in 2003. In 2009, the US Centers for Medicare and Medicaid Services announced that it would begin using ICD-10 on April 1, 2010, with full compliance by all involved parties by 2013. However, the US extended the deadline twice and did not formally require transitioning to ICD-10-CM (for most clinical encounters) until October 1, 2015.", "title": "Usage in the United States" }, { "paragraph_id": 34, "text": "The years for which causes of death in the United States have been classified by each revision are as follows:", "title": "Usage in the United States" }, { "paragraph_id": 35, "text": "Causes of death on United States death certificates, statistically compiled by the Centers for Disease Control and Prevention (CDC), are coded in the ICD, which does not include codes for human and system factors commonly called medical errors.", "title": "Usage in the United States" }, { "paragraph_id": 36, "text": "The various ICD editions include sections that classify mental and behavioural disorders. 
The ICD-10 Classification of Mental and Behavioural Disorders: Clinical Descriptions and Diagnostic Guidelines – also known as the \"blue book\" – is derived from Chapter V of ICD-10 and gives the diagnostic criteria for the conditions listed at each category therein. The blue book was developed separately from, but coexists with, the Diagnostic and Statistical Manual of Mental Disorders (DSM) of the American Psychiatric Association—though both seek to use the same diagnostic classifications. A survey of psychiatrists in 66 countries comparing use of the ICD-10 and DSM-IV found that the former was more often used for clinical diagnosis while the latter was more valued for research.", "title": "Mental health conditions" }, { "paragraph_id": 37, "text": "As part of the development of the ICD-11, WHO established an \"International Advisory Group\" to guide what would become the chapter on \"Mental, behavioural or neurodevelopmental disorders\". The working group proposed that ICD-11 should declassify the categories within ICD-10 at \"F66 Psychological and behavioural disorders that are associated with sexual development and orientation\". 
The group reported to WHO that there was \"no evidence\" these classifications were clinically useful, as they do not \"contribute to health service delivery or treatment selection nor provide essential information for public health surveillance.\" The group added that, despite ICD-10 explicitly stating \"sexual orientation by itself is not to be considered a disorder\", the inclusion of such categories \"suggest[s] that mental disorders exist that are uniquely linked to sexual orientation and gender expression\", a position already recognised by the DSM as well as by other classification systems.", "title": "Mental health conditions" }, { "paragraph_id": 38, "text": "The ICD is actually the official system for the US, although many mental health professionals do not realize this due to the dominance of the DSM.", "title": "Mental health conditions" }, { "paragraph_id": 39, "text": "A psychologist has stated: \"Serious problems with the clinical utility of both the ICD and the DSM are widely acknowledged.\"", "title": "Mental health conditions" }, { "paragraph_id": 40, "text": "Note: Since the adoption of ICD-10-CM in the US, several online tools have proliferated. They all refer to that particular modification and thus are not linked here.", "title": "External links" } ]
The International Classification of Diseases (ICD) is a globally used medical classification used in epidemiology, health management and for clinical purposes. The ICD is maintained by the World Health Organization (WHO), which is the directing and coordinating authority for health within the United Nations System. The ICD was originally designed as a health care classification system, providing a system of diagnostic codes for classifying diseases, including nuanced classifications of a wide variety of signs, symptoms, abnormal findings, complaints, social circumstances, and external causes of injury or disease. This system is designed to map health conditions to corresponding generic categories together with specific variations, assigning for these a designated code, up to six characters long. Thus, major categories are designed to include a set of similar diseases. The ICD is published by the WHO and used worldwide for morbidity and mortality statistics, reimbursement systems, and automated decision support in health care. This system is designed to promote international comparability in the collection, processing, classification, and presentation of these statistics. The ICD is a major project to statistically classify all health disorders, and provide diagnostic assistance. The ICD is a core statistically based classificatory diagnostic system for health care related issues of the WHO Family of International Classifications (WHO-FIC). The ICD is revised periodically and is currently in its 11th revision. The ICD-11, as it is therefore known, was accepted by WHO's World Health Assembly (WHA) on 25 May 2019 and officially came into effect on 1 January 2022. On 11 February 2022, the WHO stated that 35 countries were using the ICD-11. 
The ICD is part of a "family" of international classifications (WHO-FIC) that complement each other, also including the International Classification of Functioning, Disability and Health (ICF) which focuses on the domains of functioning (disability) associated with health conditions, from both medical and social perspectives, and the International Classification of Health Interventions (ICHI) that classifies the whole range of medical, nursing, functioning and public health interventions. The title of the ICD is formally the International Statistical Classification of Diseases and Related Health Problems, although the original title, International Classification of Diseases, is still informally the name by which it is usually known. In the United States and some other countries, the Diagnostic and Statistical Manual of Mental Disorders (DSM) is preferred for the classification of mental disorders for some purposes.
2001-12-30T17:09:16Z
2023-12-27T21:12:24Z
[ "Template:Webarchive", "Template:Health informatics", "Template:More citations needed section", "Template:Expand section", "Template:Cite journal", "Template:Cite report", "Template:Cite book", "Template:Official website", "Template:Medical classification", "Template:Authority control", "Template:See also", "Template:Div col end", "Template:Cite news", "Template:Cite press release", "Template:Div col", "Template:Reflist", "Template:Cite web", "Template:Citation needed", "Template:Main", "Template:Rn", "Template:Portal", "Template:Wikidata property", "Template:Short description", "Template:Redirect", "Template:Anchor" ]
https://en.wikipedia.org/wiki/International_Classification_of_Diseases
15,462
Integral domain
In mathematics, specifically abstract algebra, an integral domain is a nonzero commutative ring in which the product of any two nonzero elements is nonzero. Integral domains are generalizations of the ring of integers and provide a natural setting for studying divisibility. In an integral domain, every nonzero element a has the cancellation property, that is, if a ≠ 0, an equality ab = ac implies b = c. "Integral domain" is defined almost universally as above, but there is some variation. This article follows the convention that rings have a multiplicative identity, generally denoted 1, but some authors deviate from it by not requiring integral domains to have a multiplicative identity. Noncommutative integral domains are sometimes admitted. This article, however, follows the much more usual convention of reserving the term "integral domain" for the commutative case and using "domain" for the general case including noncommutative rings. Some sources, notably Lang, use the term entire ring for integral domain. Some specific kinds of integral domains are given with the following chain of class inclusions: An integral domain is a nonzero commutative ring in which the product of any two nonzero elements is nonzero. Equivalently: The following rings are not integral domains. In this section, R is an integral domain. Given elements a and b of R, one says that a divides b, or that a is a divisor of b, or that b is a multiple of a, if there exists an element x in R such that ax = b. The units of R are the elements that divide 1; these are precisely the invertible elements in R. Units divide all other elements. If a divides b and b divides a, then a and b are associated elements or associates. Equivalently, a and b are associates if a = ub for some unit u. An irreducible element is a nonzero non-unit that cannot be written as a product of two non-units. A nonzero non-unit p is a prime element if, whenever p divides a product ab, then p divides a or p divides b. 
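The defining zero-divisor condition above can be checked computationally for finite rings: the ring Z/nZ is an integral domain exactly when n is prime. A minimal sketch in Python (the helper name is ours, not a standard API):

```python
# Brute-force check that Z/nZ has no nonzero zero divisors.
# Z/nZ is an integral domain exactly when n is prime.

def is_integral_domain_mod(n: int) -> bool:
    """Return True if the ring Z/nZ has no nonzero zero divisors."""
    if n < 2:
        return False  # Z/1Z is the zero ring, excluded by definition
    return all((a * b) % n != 0 for a in range(1, n) for b in range(1, n))

print(is_integral_domain_mod(5))  # True: 5 is prime
print(is_integral_domain_mod(6))  # False: 2 * 3 = 0 in Z/6Z
```

This also illustrates the cancellation property: in Z/6Z, 2·1 = 2·4 = 2 even though 1 ≠ 4, precisely because 2 is a zero divisor there.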
Equivalently, an element p is prime if and only if the principal ideal (p) is a nonzero prime ideal. Both notions of irreducible elements and prime elements generalize the ordinary definition of prime numbers in the ring ℤ, if one considers the negative primes as prime. Every prime element is irreducible. The converse is not true in general: for example, in the quadratic integer ring ℤ[√−5] the element 3 is irreducible (if it factored nontrivially, the factors would each have to have norm 3, but there are no norm-3 elements since a² + 5b² = 3 has no integer solutions), but not prime (since 3 divides (2 + √−5)(2 − √−5) without dividing either factor). In a unique factorization domain (or more generally, a GCD domain), an irreducible element is a prime element. While unique factorization does not hold in ℤ[√−5], there is unique factorization of ideals. See the Lasker–Noether theorem. The field of fractions K of an integral domain R is the set of fractions a/b with a and b in R and b ≠ 0, modulo an appropriate equivalence relation, equipped with the usual addition and multiplication operations. It is "the smallest field containing R" in the sense that there is an injective ring homomorphism R → K such that any injective ring homomorphism from R to a field factors through K. The field of fractions of the ring of integers ℤ is the field of rational numbers ℚ. The field of fractions of a field is isomorphic to the field itself. Integral domains are characterized by the condition that they are reduced (that is, x² = 0 implies x = 0) and irreducible (that is, there is only one minimal prime ideal).
The former condition ensures that the nilradical of the ring is zero, so that the intersection of all the ring's minimal primes is zero. The latter condition is that the ring have only one minimal prime. It follows that the unique minimal prime ideal of a reduced and irreducible ring is the zero ideal, so such rings are integral domains. The converse is clear: an integral domain has no nonzero nilpotent elements, and the zero ideal is the unique minimal prime ideal. This translates, in algebraic geometry, into the fact that the coordinate ring of an affine algebraic set is an integral domain if and only if the algebraic set is an algebraic variety. More generally, a commutative ring is an integral domain if and only if its spectrum is an integral affine scheme. The characteristic of an integral domain is either 0 or a prime number. If R is an integral domain of prime characteristic p, then the Frobenius endomorphism x ↦ x^p is injective.
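The failure of unique factorization in ℤ[√−5] described above can be written out as a short worked example, using the norm already implicit in the text:

```latex
9 \;=\; 3 \cdot 3 \;=\; \left(2+\sqrt{-5}\right)\left(2-\sqrt{-5}\right),
\qquad
N\!\left(a+b\sqrt{-5}\right) \;=\; a^{2}+5b^{2}.
```

Since N is multiplicative, N(3) = 9 and N(2 ± √−5) = 4 + 5 = 9; any nontrivial factor of one of these elements would need norm 3, which a² + 5b² = 3 rules out, so all four factors are irreducible. Yet 3 divides the product on the right without dividing either factor, so 3 is irreducible but not prime, and the two factorizations of 9 are genuinely distinct.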
Infundibulum
An infundibulum (Latin for funnel; plural, infundibula) is a funnel-shaped cavity or organ.
Interrupt latency
In computing, interrupt latency refers to the delay between the start of an Interrupt Request (IRQ) and the start of the respective Interrupt Service Routine (ISR). For many operating systems, devices are serviced as soon as the device's interrupt handler is executed. Interrupt latency may be affected by microprocessor design, interrupt controllers, interrupt masking, and the operating system's (OS) interrupt handling methods. There is usually a trade-off between interrupt latency, throughput, and processor utilization. Many of the techniques of CPU and OS design that improve interrupt latency will decrease throughput and increase processor utilization. Techniques that increase throughput may increase interrupt latency and increase processor utilization. Lastly, trying to reduce processor utilization may increase interrupt latency and decrease throughput. Minimum interrupt latency is largely determined by the interrupt controller circuit and its configuration. These can also affect the jitter in the interrupt latency, which can drastically affect the real-time schedulability of the system. The Intel APIC architecture is well known for producing a large amount of interrupt latency jitter. Maximum interrupt latency is largely determined by the methods an OS uses for interrupt handling. For example, most processors allow programs to disable interrupts, putting off the execution of interrupt handlers, in order to protect critical sections of code. During the execution of such a critical section, all interrupt handlers that cannot execute safely within a critical section are blocked (they save the minimum amount of information required to restart the interrupt handler after all critical sections have exited). So the interrupt latency for a blocked interrupt is extended to the end of the critical section, plus any interrupts with equal or higher priority that arrived while the block was in place.
Many computer systems require low interrupt latencies, especially embedded systems that need to control machinery in real time. Sometimes these systems use a real-time operating system (RTOS). An RTOS makes the promise that no more than a specified maximum amount of time will pass between executions of subroutines. In order to do this, the RTOS must also guarantee that interrupt latency will never exceed a predefined maximum. Advanced interrupt controllers implement a multitude of hardware features in order to minimize the overhead during context switches and the effective interrupt latency. These include features such as interrupt prioritization and nesting, vectored interrupt dispatch, and hardware saving and restoring of processor context. Also, there are many other methods hardware may use to help lower the requirements for shorter interrupt latency in order to make a given interrupt latency tolerable in a situation. These include buffers and flow control. For example, most network cards implement transmit and receive ring buffers, interrupt rate limiting, and hardware flow control. Buffers allow data to be stored until it can be transferred, and flow control allows the network card to pause communications without having to discard data if the buffer is full. Modern hardware also implements interrupt rate limiting. This helps prevent interrupt storms or live-locks by having the hardware wait a programmable minimum amount of time between each interrupt it generates. Interrupt rate limiting reduces the amount of time spent servicing interrupts, allowing the processor to spend more time doing useful work. Exceeding the maximum tolerable latency nonetheless results in a soft (recoverable) or hard (non-recoverable) error.
İskender kebap
İskender kebap is a Turkish dish that consists of sliced döner kebab meat topped with hot tomato sauce over pieces of pita bread, generously slathered with a special melted sheep's-milk butter and yogurt. It can be prepared from thinly and carefully cut grilled lamb or chicken. The tomato sauce and melted butter are generally poured over the dish at the table, for the customer's amusement. It is one of the most popular dishes of Turkey. It takes its name from its inventor, İskender Efendi, who lived in Bursa in the late 19th-century Ottoman Empire. "Kebapçı İskender" is trademarked by the İskenderoğlu family, who still run the restaurant in Bursa. The dish is available in many restaurants throughout the country, mostly under the name "İskender kebap" or "Bursa kebabı", or at times under an alternative name made up by the serving restaurant, such as "Uludağ kebabı".
LGBT people and Islam
Within the Muslim world, sentiment towards LGBT people varies and has varied between societies and individual Muslims, but is today largely negative. While colloquial and, in many cases, de facto official acceptance of at least some homosexual behavior was commonplace in pre-modern periods, later developments, starting in the 19th century, have created a generally hostile environment for LGBT people. Most Muslim-majority countries have opposed moves to advance LGBT rights and recognition at the United Nations (UN), including within the UN General Assembly and the UN Human Rights Council. Meanwhile, contemporary Islamic jurisprudence generally accepts the possibility for transgender people (mukhannith/mutarajjilah) to change their gender status, but only after surgery, linking one's gender to biological markers. Trans people are nonetheless confronted with stigma, discrimination, intimidation, and harassment in many Muslim-majority societies. Transgender identities are often considered within the gender binary, although some pre-modern scholars had recognized effeminate men as a form of third gender, as long as their behaviour was naturally in contrast to their assigned gender at birth. There are differences between how the Qur'an and later hadith traditions (orally transmitted collections of Muhammad's teachings) treat homosexuality, with many Western scholars arguing that the latter is far more explicitly negative. On the basis of these differences, such scholars have argued that Muhammad, the main Islamic prophet, never forbade homosexual relationships outright, although he disapproved of them in line with his contemporaries. There is, however, comparatively little evidence of homosexual practices being prevalent in Muslim societies for the first century and a half of Islamic history; male homosexual relationships were known of and discriminated against in Arabia, but were generally not met with legal sanctions.
In later pre-modern periods, historical evidence of homosexual relationships is more common, and shows de facto tolerance of these relationships. Historical records suggest that laws against homosexuality were invoked infrequently, mainly in cases of rape or other "exceptionally blatant infringement on public morals" as defined by Islamic law. This allowed themes of homoeroticism and pederasty to be cultivated in Islamic poetry and other Islamic literary genres, written in major languages of the Muslim world, from the 8th century CE into the modern era. The conceptions of homosexuality found in these texts resembled the traditions of ancient Greece and ancient Rome as opposed to the modern understanding of sexual orientation. In the modern era, Muslim public attitudes towards homosexuality underwent a marked change beginning in the 19th century, largely due to the global spread of Islamic fundamentalist movements, namely Salafism and Wahhabism. The Muslim world was also influenced by the sexual notions and restrictive norms that were prevalent in the Christian world at the time, particularly with regard to anti-homosexual legislation throughout European societies, most of which adhered to Christian law. As such, a number of Muslim-majority countries that were once colonies of European empires retain the criminal penalties that were originally implemented by European colonial authorities against those who were convicted of engaging in non-heterosexual acts. Modern Muslim homophobia is therefore generally not thought to be a direct continuation of pre-modern mores, but a phenomenon that has been shaped by a variety of local and imported frameworks. As Western culture eventually moved towards secularism and thus provided a platform for the flourishing of many LGBT movements, many Muslim fundamentalists came to associate the Western world with "ravaging moral decay" and rampant homosexuality.
In contemporary society, prejudice, anti-LGBT discrimination and anti-LGBT violence, including within legal systems, persist in much of the Muslim world, exacerbated by socially conservative attitudes and the recent rise of Islamist ideologies in some countries; laws against homosexual activities are in place in many Muslim-majority countries, several of which prescribe the death penalty for convicted offenders. Societies in the Islamic world have recognized "both erotic attraction and sexual behavior between members of the same sex". Attitudes varied; legal scholars condemned and often formulated punishments for homosexual acts, yet lenient (or often non-existent) enforcement allowed for toleration, and sometimes "celebration", of such acts. Homoeroticism was idealized in the form of poetry or artistic declarations of love, often from an older man to a younger man or adolescent boy. Accordingly, the Arabic language had an appreciable vocabulary of homoerotic terms, with dozens of words just to describe types of male prostitutes. Schmitt (1992) lists some twenty words in Arabic, Persian, and Turkish for those who are penetrated. Other related Arabic words include mukhannathun, ma'bûn, halaqī, and baghghā. There is little evidence of homosexual practice in Islamic societies for the first century and a half of the Islamic era. Homoerotic poetry appears suddenly at the end of the 8th century CE, particularly in Baghdad in the work of Abu Nuwas (756–814), who became a master of all the contemporary genres of Arabic poetry. The famous author Jahiz tried to explain the abrupt change in attitudes toward homosexuality after the Abbasid Revolution by the arrival of the Abbasid army from Khurasan, who are said to have consoled themselves with male pages when they were forbidden to take their wives with them.
The increased prosperity following the early conquests was accompanied by a "corruption of morals" in the two holy cities of Mecca and Medina, and it can be inferred that homosexual practice became more widespread during this time as a result of acculturation to foreign customs, such as the music and dance practiced by mukhannathun, who were mostly foreign in origin. The Abbasid caliph Al-Amin (r. 809–813) was said to have required slave women to be dressed in masculine clothing so he could be persuaded to have sex with them, and a broader fashion for ghulamiyyat (boy-like girls) is reflected in literature of the period. The same was said of Andalusian ruler al-Hakam II (r. 961–976). The conceptions of homosexuality found in classical Islamic texts resemble the traditions of classical Greece and those of ancient Rome, rather than the modern understanding of sexual orientation. It was expected that many mature men would be sexually attracted to both women and adolescent boys (with different views about the appropriate age range for the latter), and such men were expected to wish to play only an active role in homosexual intercourse once they reached adulthood. However, any confident assessment of the actual incidence of homosexual behavior remains elusive. Preference for homosexual over heterosexual relations was regarded as a matter of personal taste rather than a marker of homosexual identity in a modern sense. While playing an active role in homosexual relations carried no social stigma beyond that of licentious behavior, seeking to play a passive role was considered both unnatural and shameful for a mature man. Following Greek precedents, the Islamic medical tradition only regarded this latter case as pathological, and showed no concern for other forms of homosexual behavior. 
As is evident from an eleventh-century discussion among the scholars of Baghdad, some scholars who showed traits of bisexuality argued that it is natural for a man to desire anal intercourse with a fellow man, but that this would only be allowed in the afterlife. The medieval Islamic concept of homoerotic relationships was distinct from the modern concept of homosexuality, and related to the pederasty of Ancient Greece. During the early period, the growth of a beard conventionally marked the age at which an adolescent lost his homoerotic appeal, as evidenced by poetic protestations that the author still found his lover beautiful despite the growing beard. During later periods, the age of the stereotypical beloved became more ambiguous, and this prototype was often represented in Persian poetry by Turkic slave-soldiers. This trend is illustrated by the story of Mahmud of Ghazni (971–1030), the ruler of the Ghaznavid Empire, and his cupbearer Malik Ayaz. Their relationship started when Malik was a slave boy: "At the time of the coins' minting, Mahmud of Ghazni was in a passionate romantic relationship with his male slave Malik Ayaz, and had exalted him to various positions of power across the Ghazanid Empire. While the story of their love affair had been censored until recently — the result of Western colonialism and changing attitudes towards homosexuality in the Middle East — Jasmine explains how Ghazni's subjects saw their relationship as a higher form of love." Other famous examples of homosexuality include the Aghlabid Emir Ibrahim II of Ifriqiya (ruled 875–902), who was said to have been surrounded by some sixty catamites, whom he was nevertheless said to have treated in a most horrific manner. Caliph al-Mutasim in the 9th century and some of his successors were accused of homosexuality. The Christian martyr Pelagius of Córdoba was executed by the Andalusian ruler Abd al-Rahman III because the boy refused his advances.
The 14th-century Iranian poet Obeid Zakani, in his scores of satirical stories and poems, ridiculed the contradiction between the strict legalistic prohibitions of homosexuality on the one hand and its common practice on the other. The following is an example from his Ressaleh Delgosha: “Two old men, who used to exchange sex since their very childhood, were making love on the top of a mosque’s minaret in the holy city of Qom. When both finished their turns, one told the other: ‘Shameless practices have ruined our city.’ The other man nodded and said, ‘You and I are the city’s blessed seniors; what then do you expect from others?’” European sources state that Mehmed the Conqueror, an Ottoman sultan of the 15th century, "was known to have ambivalent sexual tastes, sent a eunuch to the house of Notaras, demanding that he supply his good-looking fourteen-year-old son for the Sultan's pleasure. When he refused, the Sultan instantly ordered the decapitation of Notaras, together with that of his son and his son-in-law; and their three heads … were placed on the banqueting table before him". Another youth Mehmed found attractive, and who was presumably more accommodating, was Radu III the Fair, the brother of Vlad the Impaler: "Radu, a hostage in Istanbul whose good looks had caught the Sultan's fancy, and who was thus singled out to serve as one of his most favored pages." After the defeat of Vlad, Mehmed placed Radu on the throne of Wallachia as a vassal ruler. However, some Turkish sources deny these stories. According to the Encyclopedia of Islam and the Muslim World: "Whatever the legal strictures on sexual activity, the positive expression of male homoerotic sentiment in literature was accepted, and assiduously cultivated, from the late eighth century until modern times. First in Arabic, but later also in Persian, Turkish and Urdu, love poetry by men about boys more than competed with that about women; it overwhelmed it."
Anecdotal literature reinforces this impression of general societal acceptance of the public celebration of male-male love (which hostile Western caricatures of Islamic societies in medieval and early modern times simply exaggerate). European travellers remarked on the taste that Shah Abbas of Iran (r. 1588–1629) had for wine and festivities, but also for attractive pages and cup-bearers. A painting by Riza Abbasi with homo-erotic qualities shows the ruler enjoying such delights. According to Daniel Eisenberg, "Homosexuality was a key symbolic issue throughout the Middle Ages in [Islamic] Iberia. As was customary everywhere until the nineteenth century, homosexuality was not viewed as a congenital disposition or 'identity'; the focus was on nonprocreative sexual practices, of which sodomy was the most controversial." For example, in al-Andalus "homosexual pleasures were much indulged by the intellectual and political elite. Evidence includes the behavior of rulers . . . who kept male harems." Although early Islamic writings such as the Quran expressed a mildly negative attitude towards homosexuality, laypersons usually regarded the idea with indifference, if not admiration. Few literary works displayed hostility towards non-heterosexuality, apart from partisan statements and debates about types of love (which also occurred in heterosexual contexts). Khaled el-Rouayheb (2014) maintains that "much if not most of the extant love poetry of the period [16th to 18th century] is pederastic in tone, portraying an adult male poet's passionate love for a teenage boy". In mystic writings of the medieval era, such as Sufi texts, it is "unclear whether the beloved being addressed is a teenage boy or God." European chroniclers censured "the indulgent attitudes to gay sex in the Caliphs' courts."
El-Rouayheb suggests that even though religious scholars considered sodomy an abhorrent sin, most of them did not genuinely believe that it was illicit to merely fall in love with a boy or to express this love in poetry. In secular society, however, a male's desire to penetrate a desirable youth was seen as understandable, even if unlawful. On the other hand, men adopting the passive role were more subject to stigma. The medical term ubnah denoted the pathological desire of a male to be exclusively on the receiving end of anal intercourse. Physicians who theorized about ubnah include Rhazes, who thought that it was correlated with small genitals and that a treatment was possible provided that the subject was deemed not too effeminate and the behavior not "prolonged". Dawud al-Antaki advanced the view that it could be caused by an acidic substance embedded in the veins of the anus, causing itchiness and thus the need to seek relief. The 18th and 19th centuries saw the rise of Islamic fundamentalist movements such as Wahhabism, which called for stricter adherence to the Hadith. In 1744, Muhammad bin Saud, the tribal ruler of the town of Diriyah, endorsed ibn Abd al-Wahhab’s mission and the two swore an oath to establish a state together run according to true Islamic principles. For the next seventy years, until the dismantlement of the first state in 1818, the Wahhabis dominated from Damascus to Baghdad. Homosexuality, which had been largely tolerated in the Ottoman Empire, also became criminalized, and those found guilty were thrown to their deaths from the tops of minarets. Homosexuality in the Ottoman Empire was decriminalized in 1858, as part of wider reforms during the Tanzimat. However, the authors Lapidus and Salaymeh write that before the 19th century Ottoman society had been open and welcoming to homosexuals, and that by the 1850s, under European influence, it had begun to censor homosexuality.
In Iran, several hundred political opponents were executed in the aftermath of the 1979 Islamic Revolution, with the authorities justifying the executions by accusing the victims of homosexuality. Homosexual intercourse became a capital offense under Iran's Islamic Penal Code in 1991. Though the grounds for execution in Iran are difficult to track, there is evidence that several people were hanged for homosexual behavior in 2005–2006 and in 2016, mostly in cases involving dubious charges of rape. In some countries, such as Iran and Iraq, the dominant discourse is that Western imperialism has spread homosexuality. In Egypt, though homosexuality is not explicitly criminalized, it has been widely prosecuted under vaguely formulated "morality" laws. Under the current rule of Abdel Fattah el-Sisi, arrests of LGBT individuals have risen fivefold, apparently reflecting an effort to appeal to conservatives. In Uzbekistan, an anti-sodomy law, passed after World War II with the goal of increasing the birth rate, was invoked in 2004 against a gay rights activist, who was imprisoned and subjected to extreme abuse. In Iraq, where homosexuality is legal, the breakdown of law and order following the Second Gulf War allowed Islamist militias and vigilantes to act on their prejudice against gays, with ISIS gaining particular notoriety for the gruesome acts of anti-LGBT violence committed under its rule of parts of Syria and Iraq. Scott Siraj al-Haqq Kugle has argued that while Muslims "commemorate the early days of Islam when they were oppressed as a marginalized few," many of them now forget their history and fail to protect "Muslims who are gay, transgender and lesbian." According to Georg Klauda, in the 19th and early 20th centuries, homosexual sexual contact was viewed as relatively commonplace in parts of the Middle East, owing in part to widespread sex segregation, which made heterosexual encounters outside marriage more difficult. Klauda states that "Countless writers and artists such as André Gide, Oscar Wilde, Edward M.
Forster, and Jean Genet made pilgrimages in the 19th and 20th centuries from homophobic Europe to Algeria, Morocco, Egypt, and various other Arab countries, where homosexual sex was not only met without any discrimination or subcultural ghettoization whatsoever, but rather, additionally as a result of rigid segregation of the sexes, seemed to be available on every corner." Views about homosexuality have never been uniform across the Islamic world. With reference to the Muslim world more broadly, Tilo Beckers writes that "Besides the endogenous changes in the interpretation of scriptures having a deliberalizing influence that came from within Islamic cultures, the rejection of homosexuality in Islam gained momentum through the exogenous effects of European colonialism, that is, the import of Western cultural understandings of homosexuality as a perversion." University of Münster professor Thomas Bauer points out that even though there were many orders of stoning for homosexuality, there is not a single proven case of one being carried out. Bauer continues that "Although contemporary Islamist movements decry homosexuality as a form of Western decadence, the current prejudice against it among Muslim publics stems from an amalgamation of traditional Islamic legal theory with popular notions that were imported from Europe during the colonial era, when Western military and economic superiority made Western notions of sexuality particularly influential in the Muslim world." In some Muslim-majority countries, current anti-LGBT laws were enacted by British colonial or Soviet authorities and retained following independence. The 1860 Indian Penal Code, which included an anti-sodomy statute, was used as the basis of penal laws in other parts of the British Empire.
However, as Dynes and Donaldson point out, North African countries under French colonial tutelage lacked anti-homosexual laws, which emerged only afterwards, with the full weight of Islamic opinion descending on those who, on the model of the gay liberationists of the West, would seek to make "homosexuality" (above all, adult men taking passive roles) publicly respectable. Jordan, Bahrain and, more recently, India, a country with a substantial Muslim minority, have abolished the criminal penalties for consensual homosexual acts introduced under colonial rule. Persecution of homosexuals has been exacerbated in recent decades by a rise in Islamic fundamentalism and the emergence of the gay-rights movement in the West, which allowed Islamists to paint homosexuality as a noxious Western import. The Quran contains several allusions to homosexual activity, which have prompted considerable exegetical and legal commentary over the centuries. The subject is most clearly addressed in the story of Sodom and Gomorrah (seven verses), after the men of the city demand to have sex with the (seemingly male) messengers sent by God to the prophet Lot (or Lut). The Quranic narrative largely conforms to that found in Genesis. In one passage the Quran says that the men "solicited his guests of him" (Quran 54:37), using an expression that parallels the phrasing used to describe the attempted seduction of Joseph, and in multiple passages they are accused of "coming with lust" to men instead of women (or their wives). The Quran terms this lewdness or fahisha (Arabic: فاحشة, romanized: fāḥiša) unprecedented in the history of the world: And ˹remember˺ when Lot scolded ˹the men of˺ his people, ˹saying,˺ “Do you commit a shameful deed that no man has ever done before? You lust after men instead of women! You are certainly transgressors.” But his people’s only response was to say, “Expel them from your land!
They are a people who wish to remain chaste!” So We saved him and his family except his wife, who was one of the doomed. We poured upon them a rain ˹of brimstone˺. See what was the end of the wicked! The destruction of the "people of Lut" is thought to be explicitly associated with their sexual practices. Later exegetical literature built on these verses as writers attempted to give their own views as to what went on; there was general agreement among exegetes that the "lewdness" alluded to by the Quranic passages was attempted sodomy (specifically anal intercourse) between men. Some Muslim scholars, such as the Ẓāhirī (literalist) scholar ibn Ḥazm, argue that the "people of Lut" were destroyed not because of participation in homosexuality per se, but because of disregarding Prophets and messengers and attempting to rape one of them. The sins of the "people of Lut" (Arabic: لوط) subsequently became proverbial, and the Arabic words for the act of anal sex between men, such as liwat (Arabic: لواط, romanized: liwāṭ), and for a person who performs such acts (Arabic: لوطي, romanized: lūṭi) both derive from his name, although Lut was not the one demanding sex. Some Western and modern Islamic scholars argue that the Quranic Lot story does not address homosexuality in the modern sense, and that the destruction of the "people of Lut" was the result of their breach of the ancient law of hospitality and of sexual violence, in this case the attempted rape of men. Only one passage in the Quran prescribes a strictly legal position. It is not restricted to homosexual behaviour, however, and deals more generally with zina (illicit sexual intercourse): ˹As for˺ those of your women who commit illegal intercourse—call four witnesses from among yourselves. If they testify, confine the offenders to their homes until they die or Allah ordains a ˹different˺ way for them. And the two among you who commit this sin—discipline them. If they repent and mend their ways, relieve them.
Surely Allah is ever Accepting of Repentance, Most Merciful. In the exegetical Islamic literature, this verse has provided the basis for the view that Muhammad took a lenient approach towards male homosexual practices. The Orientalist scholar Pinhas Ben Nahum has argued that "it is obvious that the Prophet viewed the vice with philosophic indifference. Not only is the punishment not indicated—it was probably some public reproach or insult of a slight nature—but mere penitence sufficed to escape the punishment". Most exegetes hold that these verses refer to illicit heterosexual relationships, although a minority view attributed to the Mu'tazilite scholar Abu Muslim al-Isfahani interpreted them as referring to homosexual relations. This view was widely rejected by medieval scholars, but has found some acceptance in modern times. Some Quranic verses describing the Islamic paradise refer to the perpetually youthful attendants who inhabit it, described as both male and female servants: the females are referred to as ḥūr, whereas the males are referred to as ghilmān, wildān, and suqāh. The slave boys are referred to in the Quran as "immortal boys" (56:17, 76:19) or "young men" (52:24) who serve wine and meals to the blessed. Although the tafsir literature does not interpret this as a homoerotic allusion, the connection was made in other literary genres, mostly humorously. For example, the Abbasid-era poet Abu Nuwas wrote: "A beautiful lad came carrying the wine / With smooth hands and fingers dyed with henna / And with long hair of golden curls around his cheeks ... I have a lad who is like the beautiful lads of paradise / And his eyes are big and beautiful". Jurists of the Hanafi school took up the question seriously, considering, but ultimately rejecting, the suggestion that homosexual pleasures were, like wine, forbidden in this world but enjoyed in the afterlife.
Ibn 'Âbidîn's Hâshiya refers to an eleventh-century debate among the scholars of Baghdad in which some argued in favor of that analogy. This was opposed by those who found anal intercourse repulsive. The hadith (sayings and actions attributed to Muhammad) show that homosexual behaviour was not unknown in seventh-century Arabia. However, given that the Quran did not specify the punishment of homosexual practices, Islamic jurists increasingly turned to several "more explicit" hadiths in an attempt to find guidance on appropriate punishment. In a hadith narrated from Abu Musa al-Ash'ari, the Prophet states: "If a woman comes upon a woman, they are both adulteresses; if a man comes upon a man, then they are both adulterers." While there are no reports relating to homosexuality in the best-known and most authentic hadith collections of Sahih al-Bukhari and Sahih Muslim, other canonical collections record a number of condemnations of the "act of the people of Lut" (male-to-male anal intercourse). For example, Abu 'Isa Muhammad ibn 'Isa at-Tirmidhi (compiling the Sunan al-Tirmidhi around 884) wrote that Muhammad had indeed prescribed the death penalty for both the active and passive partners: Narrated by Abdullah ibn Abbas: "The Prophet said: 'If you find anyone doing as Lot's people did, kill the one who does it, and the one to whom it is done'." Narrated by Abdullah ibn Abbas: "If a man who is not married is seized committing sodomy he will be stoned to death." Ibn al-Jawzi (1114–1200), writing in the 12th century, claimed that Muhammad had cursed "sodomites" in several hadiths, and had recommended the death penalty for both the active and passive partners in homosexual acts. It was narrated that Ibn Abbas said: "The Prophet said: '... cursed is the one who does the action of the people of Lot'."
Ahmad narrated from Ibn Abbas that the Prophet of Allah said: "May Allah curse the one who does the action of the people of Lot, may Allah curse the one who does the action of the people of Lot," three times. Al-Nuwayri (1272–1332) reported in his Nihaya that Muhammad is "alleged to have said what he feared most for his community were the practices of the people of Lot (he seems to have expressed the same idea in regard to wine and female seduction)." It was narrated from Jabir: "The Prophet said: 'There is nothing I fear for my followers more than the deed of the people of Lot.'" According to Oliver Leaman, other hadiths seem to permit homoerotic feelings as long as they are not translated into action. However, in one hadith attributed to Muhammad himself, which exists in multiple variants, the Islamic prophet acknowledged homoerotic temptation towards young boys and warned his Companions against it: "Do not gaze at the beardless youths, for verily they have eyes more tempting than the houris" or "... for verily they resemble the houris". These beardless youths are also described as wearing sumptuous robes and having perfumed hair. Consequently, Islamic religious leaders, skeptical of Muslim men's capacity for self-control over their sexual urges, have forbidden gazing and yearning at both males and females. In addition, there are a number of "purported (but mutually inconsistent) reports" (athar) of punishments of sodomy ordered by some of the early caliphs. Abu Bakr apparently recommended toppling a wall onto the culprit, or else burning him alive, while Ali ibn Abi Talib is said to have ordered death by stoning for one sodomite and to have had another thrown head-first from the top of the highest building in the town; according to Ibn Abbas, the latter punishment must be followed by stoning. There are, however, fewer hadiths mentioning homosexual behaviour among women, and the punishment (if any) for lesbianism was not clarified.
In Classical Arabic and Islamic literature, the plural term mukhannathun (singular: mukhannath) was used to describe gender-variant people; it has typically referred to effeminate men or people with ambiguous sexual characteristics, who appeared feminine and functioned sexually or socially in roles typically carried out by women. According to the Iranian scholar Mehrdad Alipour, "in the premodern period, Muslim societies were aware of five manifestations of gender ambiguity: This can be seen through figures such as the khasi (eunuch), the hijra, the mukhannath, the mamsuh and the khuntha (hermaphrodite/intersex)." The gender specialists Aisya Aymanee M. Zaharin and Maria Pallotta-Chiarolli give the following explanation of the meaning of the term mukhannath and its derivative Arabic forms in the hadith literature: Various academics such as Alipour (2017) and Rowson (1991) point to references in the Hadith to the existence of mukhannath: a man who carries femininity in his movements, in his appearance, and in the softness of his voice. The Arabic term for a trans woman is mukhannith, as they want to change their sex characteristics, while mukhannath presumably do not/have not. The mukhannath or effeminate man is obviously male, but naturally behaves like a female, unlike the khuntha, an intersex person, who could be either male or female. Ironically, while there is no obvious mention of mukhannath, mukhannith, or khuntha in the Qur’ān, this holy book clearly recognizes that there are some people, who are neither male nor female, or are in between, and/or could also be “non-procreative” [عَقِيم] (Surah 42 Ash-Shuraa, verse 49–50). Moreover, within Islam, there is a tradition of the elaboration and refinement of extended religious doctrines through scholarship.
This doctrine contains a passage by the scholar and hadith collector An-Nawawi: A mukhannath is the one ("male") who carries in his movements, in his appearance and in his language the characteristics of a woman. There are two types; the first is the one in whom these characteristics are innate, he did not put them on by himself, and therein is no guilt, no blame and no shame, as long as he does not perform any (illicit) act or exploit it for money (prostitution etc.). The second type acts like a woman out of immoral purposes and he is the sinner and blameworthy. The hadith collection of Bukhari (compiled in the 9th century from earlier oral traditions) includes a report regarding mukhannathun, effeminate men who were granted access to secluded women's quarters and engaged in other non-normative gender behavior. In these hadiths, attributed to Muhammad's wives, the mukhannath in question expressed his appreciation of a woman's body and described it for the benefit of another man. According to one hadith, this incident was prompted by a mukhannath servant of Muhammad's wife Umm Salama commenting upon the body of a woman; following that, Muhammad cursed the mukhannathun and their female equivalents, mutarajjilat, and ordered his followers to remove them from their homes. Aisha says: A mukhannath used to enter upon the wives of the Prophet. They (the people) counted him among those who were free of physical needs. One day the Prophet entered upon us when he was with one of his wives, and was describing the qualities of a woman, saying: When she comes forward, she comes forward with four (folds of her stomach), and when she goes backward, she goes backward with eight (folds of her stomach). The Prophet said: Do I not see that this one knows what here lies. Then they (the wives) observed veil from him.
Narrated by Abdullah ibn Abbas: The Prophet cursed effeminate men; those men who are in the similitude (assume the manners of women) and those women who assume the manners of men, and he said, "Turn them out of your houses." The Prophet turned out such-and-such man, and 'Umar turned out such-and-such woman. Early Islamic literature rarely comments upon the habits of the mukhannathun. It seems there may have been some variance in how "effeminate" they were, though there are indications that some adopted aspects of feminine dress or at least ornamentation. One hadith states that a Muslim mukhannath who had dyed his hands and feet with henna (traditionally a feminine activity) was banished from Medina, but not killed, for his behavior: A mukhannath who had dyed his hands and feet with henna was brought to the Prophet. He asked: What is the matter with this man? He was told: Apostle of Allah! He affects women's get-up. So he ordered regarding him and he was banished to an-Naqi'. The people said: Apostle of Allah! Should we not kill him? He said: I have been prohibited from killing people who pray. Abu Usamah said: Naqi' is a region near Medina and not a Baqi. Other hadiths also mention the punishment of banishment, both in connection with Umm Salama's servant and with a man who worked as a musician. Muhammad described the musician as a mukhannath and threatened to banish him if he did not end his unacceptable career. According to Everett K. Rowson, professor of Middle Eastern and Islamic Studies at New York University, none of the sources state that Muhammad banished more than two mukhannathun, and it is not clear to what extent the action was taken because of their breaking of gender rules in itself or because of the "perceived damage to social institutions from their activities as matchmakers and their corresponding access to women".
The scarcity of concrete prescriptions from hadith and the contradictory nature of information about the actions of early authorities resulted in the lack of agreement among classical jurists as to how homosexual activity should be treated. Classical Islamic jurists did not deal with homosexuality as a sexual orientation, since the latter concept is modern and has no equivalent in traditional law, which dealt with it under the technical terms of liwat and zina. Broadly, traditional Islamic law took the view that homosexual activity could not be legally sanctioned because it takes place outside religiously recognised marriages. All major schools of law consider liwat (anal sex) as a punishable offence. Most legal schools treat homosexual intercourse with penetration similarly to unlawful heterosexual intercourse under the rubric of zina, but there are differences of opinion with respect to methods of punishment. Some legal schools "prescribed capital punishment for sodomy, but others opted only for a relatively mild discretionary punishment." The Hanbalites are the most severe among Sunni schools, insisting on capital punishment for anal sex in all cases, while the other schools generally restrict punishment to flagellation with or without banishment, unless the culprit is muhsan (Muslim free married adult), and Hanafis often suggest no physical punishment at all, leaving the choice to the judge's discretion. The founder of the Hanafi school Abu Hanifa refused to recognize the analogy between sodomy and zina, although his two principal students disagreed with him on this point. The Hanafi scholar Abu Bakr Al-Jassas (d. 981 AD/370 AH) argued that the two hadiths on killing homosexuals "are not reliable by any means and no legal punishment can be prescribed based on them". 
Where capital punishment is prescribed and a particular method is recommended, the methods range from stoning (Hanbali, Maliki) to the sword (some Hanbalites and Shafi'ites), or leaving it to the court to choose between several methods, including throwing the culprit off a high building (Shi'ite). For unclear reasons, the treatment of homosexuality in Twelver Shi'i jurisprudence is generally harsher than in Sunni fiqh, while Zaydi and Isma'ili Shia jurists took positions similar to the Sunnis. Where flogging is prescribed, there is a tendency toward leniency, and some recommend that the prescribed penalty should not be applied in full, with Ibn Hazm reducing the number of strokes to 10. There was debate as to whether the active and passive partners in anal sex should be punished equally. Beyond penetrative anal sex, there was "general agreement" that "other homosexual acts (including any between females) were lesser offenses, subject only to discretionary punishment." Some jurists viewed sexual intercourse as possible only for an individual who possesses a phallus; hence their definitions of sexual intercourse rely on the entry of as little as the corona of the phallus into a partner's orifice. Since women do not possess a phallus and cannot have intercourse with one another, they are, in this interpretation, physically incapable of committing zinā. Since a hadd punishment for zina requires testimony from four witnesses to the actual act of penetration or a confession from the accused repeated four times, the legal criteria for the prescribed harsh punishments of homosexual acts were very difficult to fulfill. The debates of classical jurists are "to a large extent theoretical, since homosexual relations have always been tolerated" in pre-modern Islamic societies.
While it is difficult to determine to what extent the legal sanctions were enforced in different times and places, the historical record suggests that the laws were invoked mainly in cases of rape or other "exceptionally blatant infringement on public morals". Documented instances of prosecution for homosexual acts are rare, and those which followed the legal procedure prescribed by Islamic law are even rarer. In her book, Kecia Ali notes that "contemporary scholars disagree sharply about the Qur'anic perspective on same-sex intimacy." One scholar represents the conventional perspective by arguing that the Qur'an "is very explicit in its condemnation of homosexuality leaving scarcely any loophole for a theological accommodation of homosexuality in Islam." Another scholar argues that "the Qur'an does not address homosexuality or homosexuals explicitly." Overall, Ali says that "there is no one Muslim perspective on anything." Many Muslim scholars have followed a "don't ask, don't tell" policy with regard to homosexuality in Islam, treating the subject with passivity. Mohamed El-Moctar El-Shinqiti, director of the Islamic Center of South Plains in Texas, has argued that "[even though] homosexuality is a grievous sin...[a] no legal punishment is stated in the Qur'an for homosexuality...[b] it is not reported that Prophet Muhammad has punished somebody for committing homosexuality...[c] there is no authentic hadith reported from the Prophet prescribing a punishment for the homosexuals..." Classical hadith scholars such as Al-Bukhari, Yahya ibn Ma'in, Al-Nasa'i, Ibn Hazm, Al-Tirmidhi, and others have disputed the authenticity of the hadiths reporting these statements. The Egyptian Islamist journalist Muhammad Jalal Kishk also found no punishment for homosexual acts prescribed in the Quran, regarding the hadith that mentioned it as poorly attested.
He did not approve of such acts, but believed that Muslims who abstained from sodomy would be rewarded by sex with youthful boys in paradise. Faisal Kutty, a professor of Islamic law at Indiana-based Valparaiso University Law School and Toronto-based Osgoode Hall Law School, commented on the contemporary same-sex marriage debate in a 27 March 2014 essay in the Huffington Post. He acknowledged that while iterations of Islamic law prohibit pre- and extramarital as well as same-sex sexual activity, they do not attempt to "regulate feelings, emotions and urges, but only its translation into action that authorities had declared unlawful". Kutty, who teaches comparative law and legal reasoning, also wrote that many Islamic scholars have "even argued that homosexual tendencies themselves were not haram [prohibited] but had to be suppressed for the public good". He claimed that this may not be "what the LGBTQ community wants to hear", but that "it reveals that even classical Islamic jurists struggled with this issue and had a more sophisticated attitude than many contemporary Muslims". Kutty, who in the past wrote in support of allowing Islamic principles in dispute resolution, also noted that "most Muslims have no problem extending full human rights to those—even Muslims—who live together 'in sin'". He argued that it therefore seems hypocritical to deny fundamental rights to same-sex couples. Moreover, he concurred with the Islamic legal scholar Mohamed Fadel in arguing that this is not about changing Islamic marriage (nikah), but about making "sure that all citizens have access to the same kinds of public benefits". Scott Siraj al-Haqq Kugle, a professor of Islamic Studies at Emory University, has argued for a different interpretation of the Lot narrative, focusing not on the sexual act but on the infidelity of the tribe and their rejection of Lot's Prophethood.
According to Kugle, "where the Qur'an treats same-sex acts, it condemns them only so far as they are exploitive or violent." More generally, Kugle notes that the Quran refers to four different levels of personality. One level is "genetic inheritance." The Qur'an refers to this level as one's "physical stamp" that "determines one's temperamental nature" including one's sexuality. On the basis of this reading of the Qur'an, Kugle asserts that homosexuality is "caused by divine will," so "homosexuals have no rational choice in their internal disposition to be attracted to same-sex mates." Kugle argues that if the classical commentators had seen "sexual orientation as an integral aspect of human personality," they would have read the narrative of Lot and his tribe "as addressing male rape of men in particular" and not as "addressing homosexuality in general." Kugle furthermore reads the Qur'an as holding "a positive assessment of diversity." Under this reading, Islam can be described as "a religion that positively assesses diversity in creation and in human societies," allowing gay and lesbian Muslims to view homosexuality as representing the "natural diversity in sexuality in human societies." A critique of Kugle's approach, interpretations and conclusions was published in 2016 by Mobeen Vaid. In a 2012 book, Aisha Geissinger writes that there are "apparently irreconcilable Muslim standpoints on same-sex desires and acts," all of which claim "interpretative authenticity." One of these standpoints results from "queer-friendly" interpretations of the Lot story and the Quran. The Lot story is interpreted as condemning "rape and inhospitality rather than today's consensual same-sex relationships." 
In their book Islamic Law and Muslim Same-Sex Unions, Junaid Jahangir and Hussein Abdullatif argue that interpretations which view the Quranic narrative of the people of Lot and the derived classical notion of liwat as applying to same-sex relationships reflect the sociocultural norms and medical knowledge of the societies that produced those interpretations. They further argue that the notion of liwat is compatible with the Quranic narrative, but not with the contemporary understanding of same-sex relationships based on love and shared responsibilities. In his 2010 article "Sexuality and Islam", Abdessamad Dialmy addressed "sexual norms defined by the sacred texts (Koran and Sunna)." He wrote that "sexual standards in Islam are paradoxical." The sacred texts "allow and actually are an enticement to the exercise of sexuality." However, they also "discriminate ... between heterosexuality and homosexuality." Islam's paradoxical standards result in "the current back and forth swing of sexual practices between repression and openness." Dialmy sees a solution to this swing in a "reinterpretation of repressive holy texts." According to the International Lesbian and Gay Association (ILGA), seven countries still retain capital punishment for homosexual behavior: Saudi Arabia, Yemen, Iran, Afghanistan, Mauritania, northern Nigeria, and the United Arab Emirates. Afghanistan has had the death penalty for homosexuality since the 2021 Taliban takeover. In Qatar, Algeria, Uzbekistan, and the Maldives, homosexuality is punished with time in prison or a fine. This has led to controversy regarding Qatar, which hosted the 2022 FIFA World Cup. In 2010, human rights groups questioned the awarding of hosting rights to Qatar, due to concerns that gay football fans might be jailed. In response, Sepp Blatter, head of FIFA, joked that they would have to "refrain from sexual activity" while in Qatar. He later withdrew the remarks after condemnation from rights groups.
Same-sex sexual activity has been illegal in Chad since 1 August 2017 under a new penal code; before that, homosexuality between consenting adults had never been criminalized there. In Egypt, openly gay men have been prosecuted under general public morality laws (see Cairo 52): "Sexual relations between consenting adult persons of the same sex in private are not prohibited as such. However, the Law on the Combating of Prostitution, and the law against debauchery have been used to imprison gay men in recent years." In January 2019, an Egyptian TV host was sentenced to a year in prison for interviewing a gay man. The Sunni Islamist militant group and Salafi-jihadist terrorist organization ISIL/ISIS/IS/Daesh, which invaded and claimed parts of Iraq and Syria between 2014 and 2017, enacted the political and religious persecution of LGBT people and decreed capital punishment for them. The group executed more than two dozen men and women for suspected homosexual activity, including several thrown off the tops of buildings in highly publicized executions. In India, which has the third-largest Muslim population in the world, and where Islam is the largest minority religion, the largest Islamic seminary (Darul Uloom Deoband) vehemently opposed government moves to abrogate and liberalize colonial-era laws that banned homosexuality. As of September 2018, homosexuality is no longer a criminal act in India, and most of the religious groups withdrew their opposing claims against it in the Supreme Court. In Iraq, homosexuality is allowed by the government, but terrorist groups often carry out illegal executions of gay people. Saddam Hussein was "unbothered by sexual mores." Ali Hili reports that "since the 2003 invasion more than 700 people have been killed because of their sexuality." He calls Iraq the "most dangerous place in the world for sexual minorities."
In Jordan, where homosexuality is legal, "gay hangouts have been raided or closed on bogus charges, such as serving alcohol illegally." Despite this legality, social attitudes towards homosexuality remain hostile. Pakistan's law is a mixture of British colonial law and Islamic law, both of which prescribe criminal penalties for same-sex sexual acts. The Pakistan Penal Code of 1860, originally developed under colonial rule, punishes sodomy with a possible prison sentence; in practice, however, gay and bisexual men are more likely to face sporadic police fines and jail sentences. In Bangladesh, homosexual acts are illegal and punishable under Section 377. In 2009 and 2013, the Bangladeshi Parliament refused to overturn Section 377. In Saudi Arabia, the maximum punishment for homosexual acts is public execution by beheading. In Malaysia, homosexual acts are illegal and punishable with jail, fines, deportation, whipping or chemical castration. In October 2018, Prime Minister Mahathir Mohamad stated that Malaysia would not "copy" Western nations' approach towards LGBT rights, arguing that those countries were exhibiting a disregard for the institutions of the traditional family and marriage and that Malaysia's own value system was sound. In May 2019, in response to George Clooney's warning about countries intending to impose the death penalty for homosexuals, as Brunei had done, the Deputy Foreign Minister Marzuki Yahya pointed out that Malaysia does not kill gay people and would not resort to killing sexual minorities. He also said that although such lifestyles deviate from Islam, the government would not impose such a punishment on the group. Indonesia does not have a sodomy law and does not currently criminalize private, non-commercial homosexual acts among consenting adults, except in the Aceh province, where homosexuality is illegal for Muslims under Islamic Sharia law and punishable by flogging.
While not criminalising homosexuality, the country does not recognise same-sex marriage. In July 2015, the Minister of Religious Affairs stated that it is difficult to legalize gay marriage in Indonesia, because strongly held religious norms speak against it. According to some jurists, homosexuals should be punished by stoning to death, while another group considers flogging with 100 lashes the correct punishment. In Turkey, homosexuality is legal, but "official censure can be fierce". A former interior minister, İdris Naim Şahin, called homosexuality an example of "dishonour, immorality and inhuman situations". Turkey held its 16th Gay Pride Parade in Istanbul on 30 June 2019. The latest addition to the list of criminalizing Muslim countries is Brunei, which has implemented penalties for homosexual acts within its Sharia Penal Code in stages since 2014. The code prescribes death by stoning as punishment for sex between men, while sex between women is punishable by caning or imprisonment. The sultanate currently has a moratorium in effect on the death penalty. All nations currently having capital punishment as a potential penalty for homosexual activity are Muslim-majority countries and base those laws on interpretations of Islamic teachings. In 2020, the International Lesbian, Gay, Bisexual, Trans and Intersex Association (ILGA) released its most recent State Sponsored Homophobia report. The report found that eleven countries or regions impose the death penalty for "same-sex sexual acts" with reference to sharia-based laws. In Iran, under articles 129 and 131, lesbian sex is punishable by up to 100 lashes for each of the first three offences and by death for the fourth. The death penalty is implemented nationwide in Brunei, Iran, Saudi Arabia, Afghanistan, Yemen, northern Nigeria, the United Arab Emirates, Mauritania and Somalia.
This punishment is also allowed by law but not implemented in Qatar and Pakistan, and was formerly implemented through non-state courts by ISIS in parts of Iraq and Syria. Because Brunei's law dictates that gay sex be punishable by stoning, many of its targeted citizens fled to Canada in hopes of finding refuge. The law also imposes the same punishment for adultery among heterosexual couples. Despite pushback from citizens in the LGBTQ+ community, the Brunei Prime Minister's Office issued a statement explaining Brunei's intention to carry through with the law. It has been suggested that this is part of a plan to steer Brunei away from the Western world and towards a Muslim one. In the Chechen Republic, a part of the Russian Federation, Ramzan Kadyrov has actively discriminated against homosexual individuals and presided over a campaign of arbitrary detention and extrajudicial killing. It has been suggested that "to counteract popular support for an Islamist insurgency that erupted after the Soviet breakup, President Vladimir V. Putin of Russia has granted wide latitude to [Kadyrov] to co-opt elements of the Islamist agenda, including an intolerance of gays." Reports of the discrimination in Chechnya have in turn been used to stoke Islamophobic, racist, and anti-Russian rhetoric. Jessica Stern, executive director of OutRight Action International, has criticized this bigotry, noting: "Using a violent attack on men accused of being gay to legitimize islamophobia is dangerous and misleading. It negates the experiences of queer muslims and essentializes all muslims as homophobic. We cannot permit this tragedy to be co-opted by ethno-nationalists to perpetuate anti-Muslim or anti-Russian sentiment. The people and their government are never the same." In Algeria, Bangladesh, Chad, Morocco, Aceh, the Maldives, Oman, Pakistan, Qatar, Syria, and Tunisia, homosexuality is illegal, and penalties may be imposed.
In Kuwait, Turkmenistan and Uzbekistan, homosexual acts between males are illegal, but homosexual relations between females are legal. Same-sex sexual intercourse is legal in Albania, Azerbaijan, Bahrain, Bosnia and Herzegovina, Burkina Faso, Djibouti (de jure), Guinea-Bissau, Iraq (de jure), Jordan, Kazakhstan, Kosovo, Kyrgyzstan, Mali, Niger, Tajikistan, Turkey, West Bank (State of Palestine), Indonesia, and in Northern Cyprus. In Albania and Turkey, there have been discussions about legalizing same-sex marriage. Albania, Northern Cyprus, Bosnia and Herzegovina and Kosovo also protect LGBT people with anti-discrimination laws. In Lebanon, courts have ruled that the country's penal code must not be used to target homosexuals, but the law has yet to be changed by parliament. In 2007, there was a gay party in the Moroccan town of al-Qasr al-Kabir. Rumours spread that this was a gay marriage and more than 600 people took to the streets, condemning the alleged event and protesting against leniency towards homosexuals. Several persons who attended the party were detained and eventually six Moroccan men were sentenced to between four and ten months in prison for "homosexuality". In France, there was an Islamic same-sex marriage on 18 February 2012. In Paris in November 2012 a room in a Buddhist prayer hall was used by gay Muslims and called a "gay-friendly mosque", and a French Islamic website is supporting religious same-sex marriage. The French overseas department of Mayotte, which has a majority-Muslim population, legalized same-sex marriage in 2013, along with the rest of France. The first American Muslim in the United States Congress, Keith Ellison (D-MN) said in 2010 that all discrimination against LGBT people is wrong. 
He further expressed support for gay marriage, stating: "I believe that the right to marry someone who you please is so fundamental it should not be subject to popular approval any more than we should vote on whether blacks should be allowed to sit in the front of the bus." In 2014, eight men were jailed for three years by a Cairo court after the circulation of a video of them allegedly taking part in a private wedding ceremony between two men on a boat on the Nile. In the late 1980s, Mufti Muhammad Sayyid Tantawy of Egypt issued a fatwa supporting the right for those who fit the description of mukhannathun and mukhannathin to have sex reassignment surgery; Ayatollah Khomeini of Iran issued similar fatwas around the same time. Khomeini's initial fatwa concerned intersex individuals as well, but he later specified that sex reassignment surgery was also permissible in the case of transgender individuals. Because homosexuality is illegal in Iran but gender transition is legal, some gay individuals have been forced to undergo sex reassignment surgery and transition into the opposite sex, regardless of their actual gender identity. While Iran has outlawed homosexuality, Iranian thinkers such as Ayatollah Khomeini have allowed for transgender people to change their sex so that they can enter heterosexual relationships. Iran is the only Muslim-majority country in the Persian Gulf region that allows transgender people to express themselves by recognizing their self-identified gender and subsidizing reassignment surgery; the government provides up to half the cost for those needing financial assistance, and a sex change is recognized on the birth certificate. Despite this, those who do not undergo reassignment surgery are not recognized as trans. In Pakistan, transgender people make up 0.005 percent of the total population. Previously, transgender people were isolated from society and had no legal rights or protections.
They also suffered discrimination in healthcare services. For example, in 2016 a transgender individual died in a hospital while doctors were trying to decide which ward the patient should be placed in. Transgender people also faced discrimination in finding employment, resulting from incorrect identity cards and incongruous legal status. Many were forced into poverty, dancing, singing, and begging on the streets to scrape by. On 26 June 2016, clerics affiliated with the Pakistan-based organization Tanzeem Ittehad-i-Ummat issued a fatwa on transgender people under which a trans woman (born male) with "visible signs of being a woman" is allowed to marry a man, and a trans man (born female) with "visible signs of being a man" is allowed to marry a woman. Pakistani transgender persons can also change their legal sex, and Muslim ritual funerals apply to them. Depriving transgender people of their inheritance, or humiliating, insulting or teasing them, were also declared haraam. In May 2018, the Pakistani parliament passed a bill giving transgender individuals the right to choose their legal sex and correct their official documents, such as ID cards, driver licenses, and passports. Today, transgender people in Pakistan have the right to vote and to search for a job free from discrimination. As of 2018, one transgender woman had become a news anchor, and two others had been appointed to the Supreme Court. The Muslim community as a whole, worldwide, has become polarized on the subject of homosexuality. Some Muslims say that "no good Muslim can be gay", and that "traditional schools of Islamic law consider homosexuality a grave sin". At the opposite pole, "some Muslims ... are welcoming what they see as an opening within their communities to address anti-gay attitudes." It is especially young Muslims who are "increasingly speaking out in support of gay rights". According to the Albert Kennedy Trust, one in four young homeless people identify as LGBT due to their religious parents disowning them.
The Trust suggests that the majority of individuals who are homeless due to religious outcasting are either Christian or Muslim. Many young adults who come out to their parents are forced out of the house to find refuge in a more accepting place, which leads many individuals to become homeless or even attempt suicide. In 2013, the Pew Research Center conducted a study on the global acceptance of homosexuality and found a widespread rejection of homosexuality in many nations that are predominantly Muslim. In some countries, views were becoming more conservative among younger people. The coming together of "human rights discourses and sexual orientation struggles" has resulted in an abundance of "social movements and organizations concerned with gender and sexual minority oppression and discrimination." Today, most LGBTQ-affirming Islamic organizations and individual congregations are primarily based in the Western world and South Asian countries; they usually identify themselves with the liberal and progressive movements within Islam. The Ibn Ruschd-Goethe mosque in Berlin is a liberal mosque open to all types of Muslims, where men and women pray together and LGBT worshippers are welcomed and supported. Other significant LGBT-inclusive mosques or prayer groups include the El-Tawhid Juma Circle Unity Mosque in Toronto, Masjid an-Nur al-Isslaah (Light of Reform Mosque) in Washington D.C., Masjid Al-Rabia in Chicago, Unity Mosque in Atlanta, the People's Mosque in Cape Town, South Africa, the Masjid Ul-Umam mosque in Cape Town, Qal'bu Maryamin in California, and the Nur Ashki Jerrahi Sufi Community in New York City.
Muslims for Progressive Values, based in the United States and Malaysia, is "a faith-based, grassroots, human rights organization that embodies and advocates for the traditional Qur'anic values of social justice and equality for all, for the 21st Century." The Mecca Institute is an LGBT-inclusive and progressive online Islamic seminary, and serves as an online center of Islamic learning and research. The Al-Fatiha Foundation was an organization which tried to advance the cause of gay, lesbian, and transgender Muslims. It was founded in 1998 by Faisal Alam, a Pakistani American, and was registered as a nonprofit organization in the United States. The organization was an offshoot of an internet listserv that brought together many gay, lesbian and questioning Muslims from various countries. There are a number of Islamic ex-gay organizations, that is, those composed of people claiming to have experienced a basic change in sexual orientation from exclusive homosexuality to exclusive heterosexuality. These groups, like those based in socially conservative Christianity, aim to guide homosexuals towards heterosexuality. One of the leading LGBT reformatory Muslim organizations is the StraightWay Foundation, which was established in the United Kingdom in 2004 as an organization that provides information and advice for Muslims who struggle with homosexual attraction. They believe "that through following God's guidance", one may "cease to be" gay. They teach that the male-female pair is the "basis for humanity's growth" and that homosexual acts "are forbidden by God". NARTH has written favourably of the group. In 2004, Straightway entered into a controversy with the then Mayor of London, Ken Livingstone, and the controversial Islamic cleric Yusuf al-Qaradawi. It was suggested that Livingstone was giving a platform to Islamic fundamentalists, rather than to liberal and progressive Muslims.
Straightway responded to this by sending Livingstone a letter thanking him for his support of al-Qaradawi. Livingstone then ignited controversy when he thanked Straightway for the letter. Several anti-LGBT incidents have also occurred, and there are a number of Muslim LGBT activists from different parts of the world. In 2010, an anthology, Islam and Homosexuality, was published. In the foreword, Parvez Sharma sounded a pessimistic note about the future: "In my lifetime I do not see Islam drafting a uniform edict that homosexuality is permissible." The following is material from two chapters dealing with the present. Rusmir Musić, in a chapter titled "Queer Visions of Islam", said that "Queer Muslims struggle daily to reconcile their sexuality and their faith." Musić began to study in college "whether or not my love for somebody of the same gender disgusts God and whether it will propel me to hell. The answer, for me, is an unequivocal no." Furthermore, Musić wrote, "my research and reflection helped me to imagine my sexuality as a gift from a loving, not hateful, God." Marhuq Fatima Khan, in a chapter titled "Queer, American, and Muslim: Cultivating Identities and Communities of Affirmation", says that "Queer Muslims employ a few narratives to enable them to reconcile their religious and sexual identities." They "fall into three broad categories: (1) God Is Merciful; (2) That Is Just Who I Am; and (3) It's Not Just Islam." In his 2003 book Progressive Muslims: On Justice, Gender, and Pluralism, Professor Scott Siraj al-Haqq Kugle asserts "that Islam does not address homosexuality." In Kugle's reading, the Quran holds "a positive assessment of diversity." It "respects diversity in physical appearance, constitution, stature, and color of human beings as a natural consequence of Divine wisdom in creation." Therefore, Islam can be described as "a religion that positively assesses diversity in creation and in human societies."
Furthermore, in Kugle's reading, the Quran "implies that some people are different in their sexual desires than others." Thus, homosexuality can be seen as part of the "natural diversity in sexuality in human societies." This is the way "gay and lesbian Muslims" view their homosexuality. In addition to the Qur'an, Kugle refers to the benediction of Imam Al-Ghazali (the 11th-century Muslim theologian) which says "praise be to God, the marvels of whose creation are not subject to the arrows of accident." For Kugle, this benediction implies that "if sexuality is inherent in a person's personality, then sexual diversity is a part of creation, which is never accidental but is always marvelous." Kugle also refers to "a rich archive of same-sex sexual desires and expressions, written by or reported about respected members of society: literati, educated elites, and religious scholars." Given these writings, Kugle concludes that "one might consider Islamic societies (like classical Greece) to provide a vivid illustration of a 'homosexual-friendly' environment." This evoked accusations from "medieval and early modern Christian Europeans" that Muslims were "engaging openly in same-sex practices." Kugle goes a step further in his argument and asserts that "if some Muslims find it necessary to deny that sexual diversity is part of the natural created world, then the burden of proof rests on their shoulders to illustrate their denial from the Qur'anic discourse itself." Kecia Ali, in her 2016 book Sexual Ethics and Islam, says that "there is no one Muslim perspective on anything." Regarding the Quran, Ali says that modern scholars disagree about what it says about "same-sex intimacy." Some scholars argue that "the Qur'an does not address homosexuality or homosexuals explicitly." Regarding homosexuality, Ali says the belief that "exclusively homosexual desire is innate in some individuals" has been adopted "even among some relatively conservative Western Muslim thinkers."
Homosexual Muslims believe their homosexuality to be innate and view "their sexual orientation as God-given and immutable." She observes that "queer and trans people are sometimes treated as defective or deviant," and adds that it is "vital not to assume that variation implies imperfection or disability." Regarding "medieval Muslim culture," Ali says that "male desire to penetrate desirable youth ... was perfectly normal." Even if same-sex relations were not lawful, there was "an unwillingness to seek out and condemn instances of same-sex activity, but rather to let them pass by ... unpunished." Ali states that some scholars claim that Islamic societies were 'homosexual-friendly' in history. In her article "Same-sex Sexual Activity and Lesbian and Bisexual Women", Ali elaborates on homosexuality as an aspect of medieval Muslim culture. She says that "same-sex sexual expression has been a more or less recognized aspect of Muslim societies for many centuries." There are many explicit discussions of "same-sex sexual activity" in medieval Arabic literature. Ali states that the medieval tradition gave little attention to female same-sex sexual activity, as the Qur'an focuses mainly on male/male sex. Discussions of female same-sex activity centre on the punishment for the acts and on complications with the dower, whereas for men they address punishment but also the need for ablutions and the effect of the act on possible marriage decisions.
[ { "paragraph_id": 0, "text": "Within the Muslim world, sentiment towards LGBT people varies and has varied between societies and individual Muslims, but is contemporarily quite negative. While colloquial, and in many cases, de facto official acceptance of at least some homosexual behavior was commonplace in pre-modern periods, later developments, starting from the 19th-century, have created a generally hostile environment for LGBT people. Most Muslim-majority countries have opposed moves to advance LGBT rights and recognition at the United Nations (UN), including within the UN General Assembly and the UN Human Rights Council.", "title": "" }, { "paragraph_id": 1, "text": "Meanwhile, contemporary Islamic jurisprudence generally accepts the possibility for transgender people (mukhannith/mutarajjilah) to change their gender status, but only after surgery, linking one's gender to biological markers. Trans people are nonetheless confronted with stigma, discrimination, intimidation, and harassment in many Muslim majority societies. Transgender identities are often considered under the gender-binary, although some pre-modern scholars had recognized effeminate men as a form of third gender, as long as their behaviour was naturally in contrast to their assigned gender at birth.", "title": "" }, { "paragraph_id": 2, "text": "There are differences between how the Qur'an and later hadith traditions (orally transmitted collections of Muhammad's teachings) treat homosexuality, with many Western scholars arguing that the latter is far more explicitly negative. Using these differences, these scholars have argued that Muhammad, the main Islamic prophet, never forbade homosexual relationships outright, although he disapproved of them in line with his contemporaries. 
There is, however, comparatively little evidence of homosexual practices being prevalent in Muslim societies for the first century and a half of Islamic history; male homosexual relationships were known of and discriminated against in Arabia, but were generally not met with legal sanctions. In later pre-modern periods, historical evidence of homosexual relationships are more common; and show de facto tolerance of these relationships. Historical records suggest that laws against homosexuality were invoked infrequently — mainly in cases of rape or other \"exceptionally blatant infringement on public morals\" as defined by Islamic law. This allowed themes of homoeroticism and pederasty to be cultivated in Islamic poetry and other Islamic literary genres, written in major languages of the Muslim world, from the 8th century CE into the modern era. The conceptions of homosexuality found in these texts resembled the traditions of ancient Greece and ancient Rome as opposed to the modern understanding of sexual orientation.", "title": "" }, { "paragraph_id": 3, "text": "In the modern era, Muslim public attitudes towards homosexuality underwent a marked change beginning in the 19th century, largely due to the global spread of Islamic fundamentalist movements, namely Salafism and Wahhabism. The Muslim world was also influenced by the sexual notions and restrictive norms that were prevalent in the Christian world at the time, particularly with regard to anti-homosexual legislation throughout European societies, most of which adhered to Christian law. As such, a number of Muslim-majority countries that were once colonies of European empires retain the criminal penalties that were originally implemented by European colonial authorities against those who were convicted of engaging in non-heterosexual acts. 
Therefore, modern Muslim homophobia is generally not thought to be a direct continuation of pre-modern mores, but a phenomenon that has been shaped by a variety of local and imported frameworks. As Western culture eventually moved towards secularism and thus enabled a platform for the flourishing of many LGBT movements, many Muslim fundamentalists came to associate the Western world with \"ravaging moral decay\" and rampant homosexuality. In contemporary society, prejudice, anti-LGBT discrimination and/or anti-LGBT violence — including within legal systems — persist in much of the Muslim world, exacerbated by socially conservative attitudes and the recent rise of Islamist ideologies in some countries; there are laws in place against homosexual activities in a larger number of Muslim-majority countries, with a number of them prescribing the death penalty for convicted offenders.", "title": "" }, { "paragraph_id": 4, "text": "Societies in the Islamic world have recognized \"both erotic attraction and sexual behavior between members of the same sex\". Attitudes varied; legal scholars condemned and often formulated punishments for homosexual acts, yet lenient (or often non-existent) enforcement allowed for toleration, and sometimes \"celebration\" of such acts. Homoeroticism was idealized in the form of poetry or artistic declarations of love, often from an older man to a younger man or adolescent boy. Accordingly, the Arabic language had an appreciable vocabulary of homoerotic terms, with dozens of words just to describe types of male prostitutes. Schmitt (1992) identifies some twenty words in Arabic, Persian, and Turkish to identify those who are penetrated. Other related Arabic words includes mukhannathun, ma'bûn, halaqī, and baghghā.", "title": "History" }, { "paragraph_id": 5, "text": "There is little evidence of homosexual practice in Islamic societies for the first century and a half of the Islamic era. 
Homoerotic poetry appears suddenly at the end of the 8th century CE, particularly in Baghdad in the work of Abu Nuwas (756–814), who became a master of all the contemporary genres of Arabic poetry. The famous author Jahiz tried to explain the abrupt change in attitudes toward homosexuality after the Abbasid Revolution by the arrival of the Abbasid army from Khurasan, who are said to have consoled themselves with male pages when they were forbidden to take their wives with them. The increased prosperity following the early conquests was accompanied by a \"corruption of morals\" in the two holy cities of Mecca and Medina, and it can be inferred that homosexual practice became more widespread during this time as a result of acculturation to foreign customs, such as the music and dance practiced by mukhannathun, who were mostly foreign in origin. The Abbasid caliph Al-Amin (r. 809–813) was said to have required slave women to be dressed in masculine clothing so he could be persuaded to have sex with them, and a broader fashion for ghulamiyyat (boy-like girls) is reflected in literature of the period. The same was said of Andalusian ruler al-Hakam II (r. 961–976).", "title": "History" }, { "paragraph_id": 6, "text": "The conceptions of homosexuality found in classical Islamic texts resemble the traditions of classical Greece and those of ancient Rome, rather than the modern understanding of sexual orientation. It was expected that many mature men would be sexually attracted to both women and adolescent boys (with different views about the appropriate age range for the latter), and such men were expected to wish to play only an active role in homosexual intercourse once they reached adulthood. However, any confident assessment of the actual incidence of homosexual behavior remains elusive. Preference for homosexual over heterosexual relations was regarded as a matter of personal taste rather than a marker of homosexual identity in a modern sense. 
While playing an active role in homosexual relations carried no social stigma beyond that of licentious behavior, seeking to play a passive role was considered both unnatural and shameful for a mature man. Following Greek precedents, the Islamic medical tradition only regarded this latter case as pathological, and showed no concern for other forms of homosexual behavior. As evident from an eleventh-century discussion among the scholars of Baghdad, some scholars who showed traits of bisexuality argued that it is natural for a man to desire anal intercourse with a fellow man, but that this would only be allowed in the afterlife.", "title": "History" }, { "paragraph_id": 7, "text": "The medieval Islamic concept of homoerotic relationships was distinct from the modern concept of homosexuality, and related to the pederasty of Ancient Greece. During the early period, the growth of a beard was considered to mark the conventional age at which an adolescent lost his homoerotic appeal, as evidenced by poetic protestations that the author still found his lover beautiful despite the growing beard. During later periods, the age of the stereotypical beloved became more ambiguous, and this prototype was often represented in Persian poetry by Turkic slave-soldiers. This trend is illustrated by the story of Mahmud of Ghazni (971–1030), the ruler of the Ghaznavid Empire, and his cupbearer Malik Ayaz. Their relationship started when Malik was a slave boy: \"At the time of the coins’ minting, Mahmud of Ghazni was in a passionate romantic relationship with his male slave Malik Ayaz, and had exalted him to various positions of power across the Ghazanid Empire. 
While the story of their love affair had been censored until recently — the result of Western colonialism and changing attitudes towards homosexuality in the Middle East — Jasmine explains how Ghazni's subjects saw their relationship as a higher form of love.\"", "title": "History" }, { "paragraph_id": 8, "text": "Other famous examples of homosexuality include the Aghlabid Emir Ibrahim II of Ifriqiya (ruled 875–902), who was said to have been surrounded by some sixty catamites, whom he was nevertheless said to have treated in a most horrific manner. Caliph al-Mutasim in the 9th century and some of his successors were accused of homosexuality. The Christian martyr Pelagius of Córdoba was executed by Andalusian ruler Abd al-Rahman III because the boy refused his advances.", "title": "History" }, { "paragraph_id": 9, "text": "The 14th-century Iranian poet Obeid Zakani, in his scores of satirical stories and poems, ridiculed the contradiction between the strict legalistic prohibitions of homosexuality on the one hand and its common practice on the other. Following is an example from his Ressaleh Delgosha: “Two old men, who used to exchange sex since their very childhood, were making love on the top of a mosque’s minaret in the holy city of Qom. When both finished their turns, one told the other: “shameless practices have ruined our city.” The other man nodded and said, “You and I are the city’s blessed seniors, what then do you expect from others?”", "title": "History" }, { "paragraph_id": 10, "text": "European sources state that Mehmed the Conqueror, an Ottoman sultan from the 15th century, \"was known to have ambivalent sexual tastes, sent a eunuch to the house of Notaras, demanding that he supply his good-looking fourteen-year-old son for the Sultan's pleasure. When he refused, the Sultan instantly ordered the decapitation of Notaras, together with that of his son and his son-in-law; and their three heads … were placed on the banqueting table before him\". 
Another youth Mehmed found attractive, and who was presumably more accommodating, was Radu III the Fair, the brother of Vlad the Impaler: \"Radu, a hostage in Istanbul whose good looks had caught the Sultan's fancy, and who was thus singled out to serve as one of his most favored pages.\" After the defeat of Vlad, Mehmed placed Radu on the throne of Wallachia as a vassal ruler. However, some Turkish sources deny these stories.", "title": "History" }, { "paragraph_id": 11, "text": "According to the Encyclopedia of Islam and the Muslim World:", "title": "History" }, { "paragraph_id": 12, "text": "Whatever the legal strictures on sexual activity, the positive expression of male homoerotic sentiment in literature was accepted, and assiduously cultivated, from the late eighth century until modern times. First in Arabic, but later also in Persian, Turkish and Urdu, love poetry by men about boys more than competed with that about women, it overwhelmed it. Anecdotal literature reinforces this impression of general societal acceptance of the public celebration of male-male love (which hostile Western caricatures of Islamic societies in medieval and early modern times simply exaggerate).", "title": "History" }, { "paragraph_id": 13, "text": "European travellers remarked on the taste that Shah Abbas of Iran (1588–1629) had for wine and festivities, but also for attractive pages and cup-bearers. A painting by Riza Abbasi with homo-erotic qualities shows the ruler enjoying such delights.", "title": "History" }, { "paragraph_id": 14, "text": "According to Daniel Eisenberg, \"Homosexuality was a key symbolic issue throughout the Middle Ages in [Islamic] Iberia. 
As was customary everywhere until the nineteenth century, homosexuality was not viewed as a congenital disposition or 'identity'; the focus was on nonprocreative sexual practices, of which sodomy was the most controversial.\" For example, in al-Andalus \"homosexual pleasures were much indulged by the intellectual and political elite. Evidence includes the behavior of rulers . . . who kept male harems.\" Although early Islamic writings such as the Quran expressed a mildly negative attitude towards homosexuality, laypersons usually apprehended the idea with indifference, if not admiration. Few literary works displayed hostility towards non-heterosexuality, apart from partisan statements and debates about types of love (which also occurred in heterosexual contexts). Khaled el-Rouayheb (2014) maintains that \"much if not most of the extant love poetry of the period [16th to 18th century] is pederastic in tone, portraying an adult male poet's passionate love for a teenage boy\". In mystic writings of the medieval era, such as Sufi texts, it is \"unclear whether the beloved being addressed is a teenage boy or God.\" European chroniclers censured \"the indulgent attitudes to gay sex in the Caliphs' courts.\"", "title": "History" }, { "paragraph_id": 15, "text": "El-Rouayheb suggests that even though religious scholars considered sodomy as an abhorrent sin, most of them did not genuinely believe that it was illicit to merely fall in love with a boy or express this love via poetry. In secular society however, a male's desire to penetrate a desirable youth was seen as understandable, even if unlawful. On the other hand, men adopting the passive role were more subjected to stigma. The medical term ubnah qualified the pathological desire of a male to exclusively be on the receiving end of anal intercourse. 
Physicians who theorized on ubnah include Rhazes, who thought that it was correlated with small genitals and that a treatment was possible provided that the subject was deemed to be not too effeminate and the behavior not \"prolonged\". Dawud al-Antaki advanced that it could have been caused by an acidic substance embedded in the veins of the anus, causing itchiness and thus the need to seek relief.", "title": "History" }, { "paragraph_id": 16, "text": "The 18th and 19th centuries saw the rise of Islamic fundamentalism such as Wahhabism, which came to call for stricter adherence to the Hadith. In 1744, Muhammad bin Saud, the tribal ruler of the town of Diriyah, endorsed ibn Abd al-Wahhab’s mission and the two swore an oath to establish a state together run according to true Islamic principles. For the next seventy years, until the dismantlement of the first state in 1818, the Wahhabis dominated from Damascus to Baghdad. Homosexuality, which had been largely tolerated in the Ottoman Empire, also became criminalized, and those found guilty were thrown to their deaths from the top of the minarets.", "title": "History" }, { "paragraph_id": 17, "text": "Homosexuality in the Ottoman Empire was decriminalized in 1858, as part of wider reforms during the Tanzimat. However, authors Lapidus and Salaymeh write that before the 19th century Ottoman society had been open and welcoming to homosexuals, and that by the 1850s, via European influence, Ottomans began censoring homosexuality in their society. In Iran, several hundred political opponents were executed in the aftermath of the 1979 Islamic Revolution, and the authorities justified the executions by accusing them of homosexuality. Homosexual intercourse became a capital offense in Iran's Islamic Penal Code in 1991. Though the grounds for execution in Iran are difficult to track, there is evidence that several people were hanged for homosexual behavior in 2005–2006 and in 2016, mostly in cases of dubious charges of rape. 
In some countries like Iran and Iraq the dominant discourse is that Western imperialism has spread homosexuality. In Egypt, though homosexuality is not explicitly criminalized, it has been widely prosecuted under vaguely formulated \"morality\" laws. Under the current rule of Abdel Fattah el-Sisi, arrests of LGBT individuals have risen fivefold, apparently reflecting an effort to appeal to conservatives. In Uzbekistan, an anti-sodomy law, passed after World War II with the goal of increasing the birth rate, was invoked in 2004 against a gay rights activist, who was imprisoned and subjected to extreme abuse. In Iraq, where homosexuality is legal, the breakdown of law and order following the Second Gulf War allowed Islamist militias and vigilantes to act on their prejudice against gays, with ISIS gaining particular notoriety for the gruesome acts of anti-LGBT violence committed under its rule of parts of Syria and Iraq. Scott Siraj al-Haqq Kugle has argued that while Muslims \"commemorate the early days of Islam when they were oppressed as a marginalized few,\" many of them now forget their history and fail to protect \"Muslims who are gay, transgender and lesbian.\"", "title": "History" }, { "paragraph_id": 18, "text": "According to Georg Klauda, in the 19th and early 20th century, homosexual sexual contact was viewed as relatively commonplace in parts of the Middle East, owing in part to widespread sex segregation, which made heterosexual encounters outside marriage more difficult. Klauda states that \"Countless writers and artists such as André Gide, Oscar Wilde, Edward M. 
Forster, and Jean Genet made pilgrimages in the 19th and 20th centuries from homophobic Europe to Algeria, Morocco, Egypt, and various other Arab countries, where homosexual sex was not only met without any discrimination or subcultural ghettoization whatsoever, but rather, additionally as a result of rigid segregation of the sexes, seemed to be available on every corner.\" Views about homosexuality have never been universal across the Islamic world. With reference to the Muslim world more broadly, Tilo Beckers writes that \"Besides the endogenous changes in the interpretation of scriptures having a deliberalizing influence that came from within Islamic cultures, the rejection of homosexuality in Islam gained momentum through the exogenous effects of European colonialism, that is, the import of Western cultural understandings of homosexuality as a perversion.\" University of Münster professor Thomas Bauer points out that even though there were many orders of stoning for homosexuality, there is not a single proven case of it being carried out. Bauer continues that \"Although contemporary Islamist movements decry homosexuality as a form of Western decadence, the current prejudice against it among Muslim publics stems from an amalgamation of traditional Islamic legal theory with popular notions that were imported from Europe during the colonial era, when Western military and economic superiority made Western notions of sexuality particularly influential in the Muslim world.\"", "title": "History" }, { "paragraph_id": 19, "text": "In some Muslim-majority countries, current anti-LGBT laws were enacted by British or Soviet organs and retained following independence. The 1860 Indian Penal Code, which included an anti-sodomy statute, was used as a basis of penal laws in other parts of the empire. 
However, as Dynes and Donaldson point out, North African countries under French colonial tutelage lacked anti-homosexual laws, which emerged only afterwards, with the full weight of Islamic opinion descending on those who, on the model of the gay liberationists of the West, would seek to make \"homosexuality\" (above all, adult men taking passive roles) publicly respectable. Jordan, Bahrain, and - more recently - India, a country with a substantial Muslim minority, have abolished the criminal penalties for consensual homosexual acts introduced under colonial rule. Persecution of homosexuals has been exacerbated in recent decades by a rise in Islamic fundamentalism and the emergence of the gay-rights movement in the West, which allowed Islamists to paint homosexuality as a noxious Western import.", "title": "History" }, { "paragraph_id": 20, "text": "The Quran contains several allusions to homosexual activity, which have prompted considerable exegetical and legal commentary over the centuries. The subject is most clearly addressed in the story of Sodom and Gomorrah (seven verses) after the men of the city demand to have sex with the (seemingly male) messengers sent by God to the prophet Lot (or Lut). The Quranic narrative largely conforms to that found in Genesis. In one passage the Quran says that the men \"solicited his guests of him\" (Quran 54:37), using an expression that parallels phrasing used to describe the attempted seduction of Joseph, and in multiple passages they are accused of \"coming with lust\" to men instead of women (or their wives). The Quran terms this lewdness or fahisha (Arabic: فاحشة, romanized: fāḥiša) unprecedented in the history of the world:", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 21, "text": "And ˹remember˺ when Lot scolded ˹the men of˺ his people, ˹saying,˺ “Do you commit a shameful deed that no man has ever done before? You lust after men instead of women! 
You are certainly transgressors.” But his people’s only response was to say, “Expel them from your land! They are a people who wish to remain chaste!” So We saved him and his family except his wife, who was one of the doomed. We poured upon them a rain ˹of brimstone˺. See what was the end of the wicked!", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 22, "text": "The destruction of the \"people of Lut\" is thought to be explicitly associated with their sexual practices. Later exegetical literature built on these verses as writers attempted to give their own views as to what went on; and there was general agreement among exegetes that the \"lewdness\" alluded to by the Quranic passages was attempted sodomy (specifically anal intercourse) between men.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 23, "text": "Some Muslim scholars, such as the Ẓāhirī scholar (literalist) ibn Ḥazm, argue that the \"people of Lut\" were destroyed not because of participation in homosexuality per se, but because of disregarding Prophets and messengers and attempting to rape one of them.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 24, "text": "The sins of the \"people of Lut\" (Arabic: لوط) subsequently became proverbial and the Arabic words for the act of anal sex between men such as liwat (Arabic: لواط, romanized: liwāṭ) and for a person who performs such acts (Arabic: لوطي, romanized: lūṭi) both derive from his name, although Lut was not the one demanding sex.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 25, "text": "Some Western and Modern Islamic scholars argue that in the course of the Quranic Lot story, homosexuality in the modern sense is not addressed, but that the destruction of the \"people of Lut\" was a result of breaking the ancient hospitality law and sexual violence, in this case they attempted rape of men.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 
26, "text": "Only one passage in the Quran prescribes a strictly legal position. It is not restricted to homosexual behaviour, however, and deals more generally with zina (illicit sexual intercourse):", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 27, "text": "˹As for˺ those of your women who commit illegal intercourse—call four witnesses from among yourselves. If they testify, confine the offenders to their homes until they die or Allah ordains a ˹different˺ way for them. And the two among you who commit this sin—discipline them. If they repent and mend their ways, relieve them. Surely Allah is ever Accepting of Repentance, Most Merciful.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 28, "text": "In the exegetical Islamic literature, this verse has provided the basis for the view that Muhammad took a lenient approach towards male homosexual practices. The Orientalist scholar Pinhas Ben Nahum has argued that \"it is obvious that the Prophet viewed the vice with philosophic indifference. Not only is the punishment not indicated—it was probably some public reproach or insult of a slight nature—but mere penitence sufficed to escape the punishment\". Most exegetes hold that these verses refer to illicit heterosexual relationships, although a minority view attributed to the Mu'tazilite scholar Abu Muslim al-Isfahani interpreted them as referring to homosexual relations. This view was widely rejected by medieval scholars, but has found some acceptance in modern times.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 29, "text": "Some Quranic verses describing the Islamic paradise refer to perpetually youthful attendants which inhabit it, and they are described as both male and female servants: the females are referred to as ḥūr, whereas the males are referred to as ghilmān, wildān, and suqāh. 
The slave boys are referred to in the Quran as \"immortal boys\" (56:17, 76:19) or \"young men\" (52:24) who serve wine and meals to the blessed. Although the tafsir literature does not interpret this as a homoerotic allusion, the connection was made in other literary genres, mostly humorously. For example, the Abbasid-era poet Abu Nuwas wrote:", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 30, "text": "A beautiful lad came carrying the wine With smooth hands and fingers dyed with henna And with long hair of golden curls around his cheeks ... I have a lad who is like the beautiful lads of paradise", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 31, "text": "And his eyes are big and beautiful", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 32, "text": "Jurists of the Hanafi school took up the question seriously, considering, but ultimately rejecting, the suggestion that homosexual pleasures were, like wine, forbidden in this world but enjoyed in the afterlife. Ibn 'Âbidîn's Hâshiya refers to a debate among the scholars of Baghdad in the eleventh century in which some scholars argued in favor of that analogy. This was opposed by those who found anal intercourse repulsive.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 33, "text": "The hadith (sayings and actions attributed to Muhammad) show that homosexual behaviour was not unknown in seventh-century Arabia. 
However, given that the Quran did not specify the punishment of homosexual practices, Islamic jurists increasingly turned to several \"more explicit\" hadiths in an attempt to find guidance on appropriate punishment.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 34, "text": "From Abu Musa al-Ash'ari, the Prophet states that: \"If a woman comes upon a woman, they are both adulteresses, if a man comes upon a man, then they are both adulterers.\"", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 35, "text": "While there are no reports relating to homosexuality in the best known and authentic hadith collections of Sahih al-Bukhari and Sahih Muslim, other canonical collections record a number of condemnations of the \"act of the people of Lut\" (male-to-male anal intercourse). For example, Abu 'Isa Muhammad ibn 'Isa at-Tirmidhi (compiling the Sunan al-Tirmidhi around 884) wrote that Muhammad had indeed prescribed the death penalty for both the active and passive partners:", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 36, "text": "Narrated by Abdullah ibn Abbas: \"The Prophet said: 'If you find anyone doing as Lot's people did, kill the one who does it, and the one to whom it is done'.\"", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 37, "text": "Narrated Abdullah ibn Abbas: \"If a man who is not married is seized committing sodomy he will be stoned to death.\"", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 38, "text": "Ibn al-Jawzi (1114–1200), writing in the 12th century, claimed that Muhammad had cursed \"sodomites\" in several hadith, and had recommended the death penalty for both the active and passive partners in homosexual acts.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 39, "text": "It was narrated that Ibn Abbas said: \"The Prophet said: '... 
cursed is the one who does the action of the people of Lot'.\"", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 40, "text": "Ahmad narrated from Ibn Abbas that the Prophet of Allah said: 'May Allah curse the one who does the action of the people of Lot, may Allah curse the one who does the action of the people of Lot', three times.\"", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 41, "text": "Al-Nuwayri (1272–1332), writing in the 13th century, reported in his Nihaya that Muhammad is \"alleged to have said what he feared most for his community were the practices of the people of Lot (he seems to have expressed the same idea in regard to wine and female seduction).\"", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 42, "text": "It was narrated that Jabir: \"The Prophet said: 'There is nothing I fear for my followers more than the deed of the people of Lot.'\"", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 43, "text": "According to Oliver Leaman, other hadiths seem to permit homoerotic feelings as long as they are not translated into action. However, in one hadith attributed to Muhammad himself, which exists in multiple variants, the Islamic prophet acknowledged homoerotic temptation towards young boys and warned his Companions against it: \"Do not gaze at the beardless youths, for verily they have eyes more tempting than the houris\" or \"... for verily they resemble the houris\". These beardless youths are also described as wearing sumptuous robes and having perfumed hair. 
Consequently, Islamic religious leaders, skeptical of Muslim men's capacity for self-control over their sexual urges, have forbidden looking and yearning at both males and females.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 44, "text": "In addition, there are a number of \"purported (but mutually inconsistent) reports\" (athar) of punishments of sodomy ordered by some of the early caliphs. Abu Bakr apparently recommended toppling a wall on the culprit, or else burning him alive, while Ali ibn Abi Talib is said to have ordered death by stoning for one sodomite and had another thrown head-first from the top of the highest building in the town; according to Ibn Abbas, the latter punishment must be followed by stoning.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 45, "text": "There are, however, fewer hadith mentioning homosexual behaviour in women; but punishment (if any) for lesbianism was not clarified.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 46, "text": "In Classical Arabic and Islamic literature, the plural term mukhannathun (singular: mukhannath) was used to describe gender-variant people, and it has typically referred to effeminate men or people with ambiguous sexual characteristics, who appeared feminine and functioned sexually or socially in roles typically carried out by women. According to the Iranian scholar Mehrdad Alipour, \"in the premodern period, Muslim societies were aware of five manifestations of gender ambiguity: This can be seen through figures such as the khasi (eunuch), the hijra, the mukhannath, the mamsuh and the khuntha (hermaphrodite/intersex).\" Gender specialists Aisya Aymanee M. 
Zaharin and Maria Pallotta-Chiarolli give the following explanation of the meaning of the term mukhannath and its derivate Arabic forms in the hadith literature:", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 47, "text": "Various academics such as Alipour (2017) and Rowson (1991) point to references in the Hadith to the existence of mukhannath: a man who carries femininity in his movements, in his appearance, and in the softness of his voice. The Arabic term for a trans woman is mukhannith as they want to change their sex characteristics, while mukhannath presumably do not/have not. The mukhannath or effeminate man is obviously male, but naturally behaves like a female, unlike the khuntha, an intersex person, who could be either male or female. Ironically, while there is no obvious mention of mukhannath, mukhannith, or khuntha in the Qur’ān, this holy book clearly recognizes that there are some people, who are neither male nor female, or are in between, and/or could also be “non-procreative” [عَقِيم] (Surah 42 Ash-Shuraa, verse 49–50).", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 48, "text": "Moreover, within Islam, there is a tradition of the elaboration and refinement of extended religious doctrines through scholarship. This doctrine contains a passage by the scholar and hadith collector An-Nawawi:", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 49, "text": "A mukhannath is the one (\"male\") who carries in his movements, in his appearance and in his language the characteristics of a woman. There are two types; the first is the one in whom these characteristics are innate, he did not put them on by himself, and therein is no guilt, no blame and no shame, as long as he does not perform any (illicit) act or exploit it for money (prostitution etc.). 
The second type acts like a woman for immoral purposes and he is the sinner and blameworthy.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 50, "text": "The hadith collection of Bukhari (compiled in the 9th century from earlier oral traditions) includes a report regarding mukhannathun, effeminate men who were granted access to secluded women's quarters and engaged in other non-normative gender behavior: in these hadiths, attributed to Muhammad's wives, the mukhannath in question expressed his appreciation of a woman's body and described it for the benefit of another man. According to one hadith, this incident was prompted by a mukhannath servant of Muhammad's wife Umm Salama commenting upon the body of a woman; following that, Muhammad cursed the mukhannathun and their female equivalents, the mutarajjilat, and ordered his followers to remove them from their homes.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 51, "text": "Aisha says: A mukhannath used to enter upon the wives of Prophet. They (the people) counted him among those who were free of physical needs. One day the Prophet entered upon us when he was with one of his wives, and was describing the qualities of a woman, saying: When she comes forward, she comes forward with four (folds of her stomach), and when she goes backward, she goes backward with eight (folds of her stomach). The Prophet said: Do I not see that this one knows what here lies. 
Then they (the wives) observed veil from him.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 52, "text": "Narrated by Abdullah ibn Abbas: The Prophet cursed effeminate men; those men who are in the similitude (assume the manners of women) and those women who assume the manners of men, and he said, \"Turn them out of your houses.\" The Prophet turned out such-and-such man, and 'Umar turned out such-and-such woman.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 53, "text": "Early Islamic literature rarely comments upon the habits of the mukhannathun. It seems there may have been some variance in how \"effeminate\" they were, though there are indications that some adopted aspects of feminine dress or at least ornamentation. One hadith states that a Muslim mukhannath who had dyed his hands and feet with henna (traditionally a feminine activity) was banished from Medina, but not killed for his behavior.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 54, "text": "A mukhannath who had dyed his hands and feet with henna was brought to the Prophet. He asked: What is the matter with this man? He was told: Apostle of Allah! he affects women's get-up. So he ordered regarding him and he was banished to an-Naqi'. The people said: Apostle of Allah! should we not kill him? He said: I have been prohibited from killing people who pray. AbuUsamah said: Naqi' is a region near Medina and not a Baqi.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 55, "text": "Other hadiths also mention the punishment of banishment, both in connection with Umm Salama's servant and a man who worked as a musician. Muhammad described the musician as a mukhannath and threatened to banish him if he did not end his unacceptable career.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 56, "text": "According to Everett K. 
Rowson, professor of Middle Eastern and Islamic Studies at New York University, none of the sources state that Muhammad banished more than two mukhannathun, and it is not clear to what extent the action was taken because of their breaking of gender rules in itself or because of the \"perceived damage to social institutions from their activities as matchmakers and their corresponding access to women\".", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 57, "text": "The scarcity of concrete prescriptions from hadith and the contradictory nature of information about the actions of early authorities resulted in the lack of agreement among classical jurists as to how homosexual activity should be treated. Classical Islamic jurists did not deal with homosexuality as a sexual orientation, since the latter concept is modern and has no equivalent in traditional law, which dealt with it under the technical terms of liwat and zina.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 58, "text": "Broadly, traditional Islamic law took the view that homosexual activity could not be legally sanctioned because it takes place outside religiously recognised marriages. All major schools of law consider liwat (anal sex) as a punishable offence. Most legal schools treat homosexual intercourse with penetration similarly to unlawful heterosexual intercourse under the rubric of zina, but there are differences of opinion with respect to methods of punishment. 
Some legal schools \"prescribed capital punishment for sodomy, but others opted only for a relatively mild discretionary punishment.\" The Hanbalites are the most severe among Sunni schools, insisting on capital punishment for anal sex in all cases, while the other schools generally restrict punishment to flagellation with or without banishment, unless the culprit is muhsan (Muslim free married adult), and Hanafis often suggest no physical punishment at all, leaving the choice to the judge's discretion. The founder of the Hanafi school Abu Hanifa refused to recognize the analogy between sodomy and zina, although his two principal students disagreed with him on this point. The Hanafi scholar Abu Bakr Al-Jassas (d. 981 AD/370 AH) argued that the two hadiths on killing homosexuals \"are not reliable by any means and no legal punishment can be prescribed based on them\". Where capital punishment is prescribed and a particular method is recommended, the methods range from stoning (Hanbali, Maliki), to the sword (some Hanbalites and Shafi'ites), or leaving it to the court to choose between several methods, including throwing the culprit off a high building (Shi'ite).", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 59, "text": "For unclear reasons, the treatment of homosexuality in Twelver Shi'ism jurisprudence is generally harsher than in Sunni fiqh, while Zaydi and Isma'ili Shia jurists took positions similar to the Sunnis. Where flogging is prescribed, there is a tendency for indulgence and some recommend that the prescribed penalty should not be applied in full, with Ibn Hazm reducing the number of strokes to 10. There was debate as to whether the active and passive partners in anal sex should be punished equally. 
Beyond penetrative anal sex, there was \"general agreement\" that \"other homosexual acts (including any between females) were lesser offenses, subject only to discretionary punishment.\" Some jurists viewed sexual intercourse as possible only for an individual who possesses a phallus; hence those definitions of sexual intercourse that rely on the entry of as little as the corona of the phallus into a partner's orifice. Since women do not possess a phallus and cannot have intercourse with one another, they are, in this interpretation, physically incapable of committing zinā.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 60, "text": "Since a hadd punishment for zina requires testimony from four witnesses of the actual act of penetration or a confession from the accused repeated four times, the legal criteria for the prescribed harsh punishments of homosexual acts were very difficult to fulfill. The debates of classical jurists are \"to a large extent theoretical, since homosexual relations have always been tolerated\" in pre-modern Islamic societies. While it is difficult to determine to what extent the legal sanctions were enforced in different times and places, the historical record suggests that the laws were invoked mainly in cases of rape or other \"exceptionally blatant infringement on public morals\". 
Documented instances of prosecution for homosexual acts are rare, and those which followed legal procedure prescribed by Islamic law are even rarer.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 61, "text": "In her book, Kecia Ali notes that \"contemporary scholars disagree sharply about the Qur'anic perspective on same-sex intimacy.\" One scholar represents the conventional perspective by arguing that the Qur'an \"is very explicit in its condemnation of homosexuality leaving scarcely any loophole for a theological accommodation of homosexuality in Islam.\" Another scholar argues that \"the Qur'an does not address homosexuality or homosexuals explicitly.\" Overall, Ali says that \"there is no one Muslim perspective on anything.\"", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 62, "text": "Many Muslim scholars have followed a \"don't ask, don't tell\" policy in regard to homosexuality in Islam, treating the subject with passivity.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 63, "text": "Mohamed El-Moctar El-Shinqiti, director of the Islamic Center of South Plains in Texas, has argued that \"[even though] homosexuality is a grievous sin...[a] no legal punishment is stated in the Qur'an for homosexuality...[b] it is not reported that Prophet Muhammad has punished somebody for committing homosexuality...[c] there is no authentic hadith reported from the Prophet prescribing a punishment for the homosexuals...\" Classical hadith scholars such as Al-Bukhari, Yahya ibn Ma'in, Al-Nasa'i, Ibn Hazm, Al-Tirmidhi, and others have disputed the authenticity of hadith reporting these statements.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 64, "text": "Egyptian Islamist journalist Muhammad Jalal Kishk also found no punishment for homosexual acts prescribed in the Quran, regarding the hadith that mentioned it as poorly attested. 
He did not approve of such acts, but believed that Muslims who abstained from sodomy would be rewarded by sex with youthful boys in paradise.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 65, "text": "Faisal Kutty, a professor of Islamic law at Indiana-based Valparaiso University Law School and Toronto-based Osgoode Hall Law School, commented on the contemporary same-sex marriage debate in a 27 March 2014 essay in the Huffington Post. He acknowledged that while iterations of Islamic law prohibit pre- and extra-marital as well as same-sex sexual activity, they do not attempt to \"regulate feelings, emotions and urges, but only its translation into action that authorities had declared unlawful\". Kutty, who teaches comparative law and legal reasoning, also wrote that many Islamic scholars have \"even argued that homosexual tendencies themselves were not haram [prohibited] but had to be suppressed for the public good\". He claimed that this may not be \"what the LGBTQ community wants to hear\", but that, \"it reveals that even classical Islamic jurists struggled with this issue and had a more sophisticated attitude than many contemporary Muslims\". Kutty, who in the past wrote in support of allowing Islamic principles in dispute resolution, also noted that \"most Muslims have no problem extending full human rights to those—even Muslims—who live together 'in sin'\". He argued that it therefore seems hypocritical to deny fundamental rights to same-sex couples. 
Moreover, he concurred with Islamic legal scholar Mohamed Fadel in arguing that this is not about changing Islamic marriage (nikah), but about making \"sure that all citizens have access to the same kinds of public benefits\".", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 66, "text": "Scott Siraj al-Haqq Kugle, a professor of Islamic Studies at Emory University, has argued for a different interpretation of the Lot narrative focusing not on the sexual act but on the infidelity of the tribe and their rejection of Lot's Prophethood. According to Kugle, \"where the Qur'an treats same-sex acts, it condemns them only so far as they are exploitive or violent.\" More generally, Kugle notes that the Quran refers to four different levels of personality. One level is \"genetic inheritance.\" The Qur'an refers to this level as one's \"physical stamp\" that \"determines one's temperamental nature\" including one's sexuality. On the basis of this reading of the Qur'an, Kugle asserts that homosexuality is \"caused by divine will,\" so \"homosexuals have no rational choice in their internal disposition to be attracted to same-sex mates.\" Kugle argues that if the classical commentators had seen \"sexual orientation as an integral aspect of human personality,\" they would have read the narrative of Lot and his tribe \"as addressing male rape of men in particular\" and not as \"addressing homosexuality in general.\" Kugle furthermore reads the Qur'an as holding \"a positive assessment of diversity.\" Under this reading, Islam can be described as \"a religion that positively assesses diversity in creation and in human societies,\" allowing gay and lesbian Muslims to view homosexuality as representing the \"natural diversity in sexuality in human societies.\" A critique of Kugle's approach, interpretations and conclusions was published in 2016 by Mobeen Vaid.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 67, "text": "In a 2012 book, 
Aisha Geissinger writes that there are \"apparently irreconcilable Muslim standpoints on same-sex desires and acts,\" all of which claim \"interpretative authenticity.\" One of these standpoints results from \"queer-friendly\" interpretations of the Lot story and the Quran. The Lot story is interpreted as condemning \"rape and inhospitality rather than today's consensual same-sex relationships.\"", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 68, "text": "In their book Islamic Law and Muslim Same-Sex Unions, Junaid Jahangir and Hussein Abdullatif argue that interpretations which view the Quranic narrative of the people of Lot and the derived classical notion of liwat as applying to same-sex relationships reflect the sociocultural norms and medical knowledge of societies that produced those interpretations. They further argue that the notion of liwat is compatible with the Quranic narrative, but not with the contemporary understanding of same-sex relationships based on love and shared responsibilities.", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 69, "text": "In his 2010 article Sexuality and Islam, Abdessamad Dialmy addressed \"sexual norms defined by the sacred texts (Koran and Sunna).\" He wrote that \"sexual standards in Islam are paradoxical.\" The sacred texts \"allow and actually are an enticement to the exercise of sexuality.\" However, they also \"discriminate ... 
between heterosexuality and homosexuality.\" Islam's paradoxical standards result in \"the current back and forth swing of sexual practices between repression and openness.\" Dialmy sees a solution to this oscillation in a \"reinterpretation of repressive holy texts.\"", "title": "Scripture and Islamic jurisprudence" }, { "paragraph_id": 70, "text": "According to the International Lesbian and Gay Association (ILGA), seven countries still retain capital punishment for homosexual behavior: Saudi Arabia, Yemen, Iran, Afghanistan, Mauritania, northern Nigeria, and the United Arab Emirates; Afghanistan has been among them since the 2021 Taliban takeover. In Qatar, Algeria, Uzbekistan, and the Maldives, homosexuality is punished with time in prison or a fine. This has led to controversy regarding Qatar, which hosted the 2022 FIFA World Cup. In 2010, human rights groups questioned the awarding of hosting rights to Qatar, due to concerns that gay football fans might be jailed. In response, Sepp Blatter, head of FIFA, joked that they would have to \"refrain from sexual activity\" while in Qatar. He later withdrew the remarks after condemnation from rights groups.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 71, "text": "Same-sex sexual activity has been illegal in Chad since 1 August 2017 under a new penal code; before that, homosexuality between consenting adults had never been criminalized.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 72, "text": "In Egypt, openly gay men have been prosecuted under general public morality laws. (See Cairo 52.) \"Sexual relations between consenting adult persons of the same sex in private are not prohibited as such. 
However, the Law on the Combating of Prostitution, and the law against debauchery have been used to imprison gay men in recent years.\" In January 2019, an Egyptian TV host was sentenced to a year in prison for interviewing a gay man.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 73, "text": "The Sunni Islamist militant group and Salafi-jihadist terrorist organization ISIL/ISIS/IS/Daesh, which invaded and claimed parts of Iraq and Syria between 2014 and 2017, enacted the political and religious persecution of LGBT people and decreed capital punishment for them. Its members executed more than two dozen men and women for suspected homosexual activity, including several thrown off the top of buildings in highly publicized executions.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 74, "text": "In India, which has the third-largest Muslim population in the world, and where Islam is the largest minority religion, the largest Islamic seminary (Darul Uloom Deoband) has vehemently opposed recent government moves to abrogate and liberalize laws from the colonial era that banned homosexuality. As of September 2018, homosexuality is no longer a criminal act in India, and most of the religious groups withdrew their objections to decriminalization in the Supreme Court.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 75, "text": "In Iraq, homosexuality is allowed by the government, but terrorist groups often carry out illegal executions of gay people. 
Saddam Hussein was \"unbothered by sexual mores.\" Ali Hili reports that \"since the 2003 invasion more than 700 people have been killed because of their sexuality.\" He calls Iraq the \"most dangerous place in the world for sexual minorities.\"", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 76, "text": "In Jordan, where homosexuality is legal, \"gay hangouts have been raided or closed on bogus charges, such as serving alcohol illegally.\" Despite this legality, social attitudes towards homosexuality remain hostile.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 77, "text": "Pakistan's law is a mixture of British colonial law and Islamic law, both of which prescribe criminal penalties for same-sex sexual acts. The Pakistan Penal Code of 1860, originally developed under colonial rule, punishes sodomy with a possible prison sentence. In practice, however, gay and bisexual men are more likely to face sporadic police fines and jail sentences.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 78, "text": "In Bangladesh, homosexual acts are illegal and punishable under Section 377. In 2009 and 2013, the Bangladeshi Parliament refused to overturn Section 377.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 79, "text": "In Saudi Arabia, the maximum punishment for homosexual acts is public execution by beheading.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 80, "text": "In Malaysia, homosexual acts are illegal and punishable with jail, fine, deportation, whipping or chemical castration. In October 2018, Prime Minister Mahathir Mohamad stated that Malaysia would not \"copy\" Western nations' approach towards LGBT rights, indicating that these countries exhibited a disregard for the institutions of the traditional family and marriage, which he said Malaysia's value system upholds. 
In May 2019, responding to George Clooney's warning against countries that, like Brunei, intended to impose the death penalty for homosexuals, Deputy Foreign Minister Marzuki Yahya pointed out that Malaysia does not kill gay people and would not resort to killing sexual minorities. He also said that, although such lifestyles deviate from Islam, the government would not impose such a punishment on the group.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 81, "text": "Indonesia does not have a sodomy law and does not currently criminalize private, non-commercial homosexual acts among consenting adults, except in the Aceh province, where homosexuality is illegal for Muslims under Islamic Sharia law and punishable by flogging. While not criminalising homosexuality, the country does not recognise same-sex marriage. In July 2015, the Minister of Religious Affairs stated that it is difficult in Indonesia to legalize gay marriage, because strongly held religious norms speak strongly against it. Some jurists hold that homosexuals should be punished by stoning to death, while another group considers flogging with 100 lashes the correct punishment.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 82, "text": "In Turkey, homosexuality is legal, but \"official censure can be fierce\". A former interior minister, İdris Naim Şahin, called homosexuality an example of \"dishonour, immorality and inhuman situations\". Turkey held its 16th Gay Pride Parade in Istanbul on 30 June 2019.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 83, "text": "The most recent Muslim-majority country to criminalize homosexuality, Brunei has implemented penalties for homosexuals under its Sharia Penal Code in stages since 2014. It prescribes death by stoning as punishment for sex between men, and sex between women is punishable by caning or imprisonment. 
The sultanate currently has a moratorium on the death penalty in effect.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 84, "text": "All nations currently having capital punishment as a potential penalty for homosexual activity are Muslim-majority countries and base those laws on interpretations of Islamic teachings. In 2020, the International Lesbian, Gay, Bisexual, Trans and Intersex Association (ILGA) released its most recent State Sponsored Homophobia Report. The report found that eleven countries or regions impose the death penalty for \"same-sex sexual acts\" with reference to sharia-based laws. In Iran, under articles 129 and 131, lesbians face up to 100 lashes for each of the first three offences and the death penalty for the fourth. The death penalty is implemented nationwide in Brunei, Iran, Saudi Arabia, Afghanistan, Yemen, northern Nigeria, United Arab Emirates, Mauritania and Somalia. This punishment is also allowed by law but not implemented in Qatar and Pakistan, and was formerly implemented through non-state courts by ISIS (which no longer exists) in parts of Iraq and Syria.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 85, "text": "Due to Brunei's law dictating that gay sex be punishable by stoning, many of its targeted citizens fled to Canada in hopes of finding refuge. The law is also set to impose the same punishment for adultery among heterosexual couples. Despite pushback from citizens in the LGBTQ+ community, the Brunei prime minister's office issued a statement explaining Brunei's intention to carry through with the law. 
It has been suggested that this is part of a plan to separate Brunei from the Western world and orient it towards a Muslim one.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 86, "text": "In the Chechen Republic, a part of the Russian Federation, Ramzan Kadyrov has actively discriminated against homosexual individuals and presided over a campaign of arbitrary detention and extrajudicial killing. It has been suggested that \"to counteract popular support for an Islamist insurgency that erupted after the Soviet breakup, President Vladimir V. Putin of Russia has granted wide latitude to [Kadyrov] to co-opt elements of the Islamist agenda, including an intolerance of gays.\" Reports of the discrimination in Chechnya have in turn been used to stoke Islamophobic, racist, and anti-Russian rhetoric. Jessica Stern, executive director of OutRight Action International, has criticized this bigotry, noting: “Using a violent attack on men accused of being gay to legitimize islamophobia is dangerous and misleading. It negates the experiences of queer muslims and essentializes all muslims as homophobic. We cannot permit this tragedy to be co-opted by ethno-nationalists to perpetuate anti-Muslim or anti-Russian sentiment. The people and their government are never the same.”", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 87, "text": "In Algeria, Bangladesh, Chad, Morocco, Aceh, Maldives, Oman, Pakistan, Qatar, Syria, and Tunisia, homosexual activity is illegal, and penalties may be imposed. 
In Kuwait, Turkmenistan and Uzbekistan, homosexual acts between males are illegal, but homosexual relations between females are legal.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 88, "text": "Same-sex sexual intercourse is legal in Albania, Azerbaijan, Bahrain, Bosnia and Herzegovina, Burkina Faso, Djibouti (de jure), Guinea-Bissau, Iraq (de jure), Jordan, Kazakhstan, Kosovo, Kyrgyzstan, Mali, Niger, Tajikistan, Turkey, West Bank (State of Palestine), Indonesia, and in Northern Cyprus. In Albania and Turkey, there have been discussions about legalizing same-sex marriage. Albania, Northern Cyprus, Bosnia and Herzegovina and Kosovo also protect LGBT people with anti-discrimination laws.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 89, "text": "In Lebanon, courts have ruled that the country's penal code must not be used to target homosexuals, but the law has yet to be changed by parliament.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 90, "text": "In 2007, there was a gay party in the Moroccan town of al-Qasr al-Kabir. Rumours spread that this was a gay marriage and more than 600 people took to the streets, condemning the alleged event and protesting against leniency towards homosexuals. Several persons who attended the party were detained and eventually six Moroccan men were sentenced to between four and ten months in prison for \"homosexuality\".", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 91, "text": "In France, there was an Islamic same-sex marriage on 18 February 2012. In Paris in November 2012 a room in a Buddhist prayer hall was used by gay Muslims and called a \"gay-friendly mosque\", and a French Islamic website is supporting religious same-sex marriage. 
The French overseas department of Mayotte, which has a majority-Muslim population, legalized same-sex marriage in 2013, along with the rest of France.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 92, "text": "The first American Muslim in the United States Congress, Keith Ellison (D-MN) said in 2010 that all discrimination against LGBT people is wrong. He further expressed support for gay marriage stating:", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 93, "text": "I believe that the right to marry someone who you please is so fundamental it should not be subject to popular approval any more than we should vote on whether blacks should be allowed to sit in the front of the bus.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 94, "text": "In 2014, eight men were jailed for three years by a Cairo court after the circulation of a video of them allegedly taking part in a private wedding ceremony between two men on a boat on the Nile.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 95, "text": "In the late 1980s, Mufti Muhammad Sayyid Tantawy of Egypt issued a fatwa supporting the right for those who fit the description of mukhannathun and mukhannathin to have sex reassignment surgery; Ayatollah Khomeini of Iran issued similar fatwas around the same time. Khomeini's initial fatwa concerned intersex individuals as well, but he later specified that sex reassignment surgery was also permissible in the case of transgender individuals. 
Because homosexuality is illegal in Iran but gender transition is legal, some gay individuals have been forced to undergo sex reassignment surgery and transition into the opposite sex, regardless of their actual gender identity.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 96, "text": "While Iran has outlawed homosexuality, Iranian thinkers such as Ayatollah Khomeini have allowed for transgender people to change their sex so that they can enter heterosexual relationships. Iran is the only Muslim-majority country in the Persian Gulf region that allows transgender people to express themselves by recognizing their self-identified gender and subsidizing reassignment surgery. Despite this, those who do not commit to reassignment surgery are not accepted as trans. The government even provides up to half the cost for those needing financial assistance, and a sex change is recognized on the birth certificate.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 97, "text": "In Pakistan, transgender people make up 0.005 percent of the total population. Previously, transgender people were isolated from society and had no legal rights or protections. They also suffered discrimination in healthcare services. For example, in 2016 a transgender individual died in a hospital while doctors were trying to decide which ward the patient should be placed in. Transgender people also faced discrimination in finding employment resulting from incorrect identity cards and incongruous legal status. Many were forced into poverty, scraping by through dancing, singing, and begging on the streets. On 26 June 2016, clerics affiliated with the Pakistan-based organization Tanzeem Ittehad-i-Ummat issued a fatwa on transgender people under which a trans woman (born male) with \"visible signs of being a woman\" is allowed to marry a man, and a trans man (born female) with \"visible signs of being a man\" is allowed to marry a woman. 
Pakistani transgender persons can also change their (legal) sex. Muslim ritual funerals also apply. Depriving transgender people of their inheritance, or humiliating, insulting or teasing them, was also declared haraam. In May 2018, the Pakistani parliament passed a bill giving transgender individuals the right to choose their legal sex and correct their official documents, such as ID cards, driver licenses, and passports. Today, transgender people in Pakistan have the right to vote and to search for a job free from discrimination. As of 2018, one transgender woman had become a news anchor, and two others had been appointed to the Supreme Court.", "title": "Modern laws in Muslim-majority countries " }, { "paragraph_id": 98, "text": "The Muslim community as a whole, worldwide, has become polarized on the subject of homosexuality. Some Muslims say that \"no good Muslim can be gay\", and \"traditional schools of Islamic law consider homosexuality a grave sin\". At the opposite pole, \"some Muslims . . . are welcoming what they see as an opening within their communities to address anti-gay attitudes.\" Especially, it is \"young Muslims\" who are \"increasingly speaking out in support of gay rights\".", "title": "Public opinion among Muslims" }, { "paragraph_id": 99, "text": "According to the Albert Kennedy Trust, one in four young homeless people identify as LGBT due to their religious parents disowning them. The Trust suggests that the majority of individuals who are homeless after being cast out on religious grounds are either Christian or Muslim. Many young adults who come out to their parents are often forced out of the house to find refuge in a more accepting place. 
This leads many individuals to become homeless or even attempt suicide.", "title": "Public opinion among Muslims" }, { "paragraph_id": 100, "text": "In 2013, the Pew Research Center conducted a study on the global acceptance of homosexuality and found a widespread rejection of homosexuality in many nations that are predominantly Muslim. In some countries, views were becoming more conservative among younger people.", "title": "Public opinion among Muslims" }, { "paragraph_id": 101, "text": "2019 Arab Barometer Survey:", "title": "Public opinion among Muslims" }, { "paragraph_id": 102, "text": "The coming together of \"human rights discourses and sexual orientation struggles\" has resulted in an abundance of \"social movements and organizations concerned with gender and sexual minority oppression and discrimination.\" Today, most LGBTQ-affirming Islamic organizations and individual congregations are primarily based in the Western world and South Asian countries; they usually identify themselves with the liberal and progressive movements within Islam.", "title": "LGBT-related movements within Islam" }, { "paragraph_id": 103, "text": "The Ibn Ruschd-Goethe mosque in Berlin is a liberal mosque open to all types of Muslims, where men and women pray together and LGBT worshippers are welcomed and supported. 
Other significant LGBT-inclusive mosques or prayer groups include the El-Tawhid Juma Circle Unity Mosque in Toronto, Masjid an-Nur al-Isslaah (Light of Reform Mosque) in Washington D.C., Masjid Al-Rabia in Chicago, Unity Mosque in Atlanta, People's Mosque in Cape Town South Africa, Masjid Ul-Umam mosque in Cape Town, Qal'bu Maryamin in California, and the Nur Ashki Jerrahi Sufi Community in New York City.", "title": "LGBT-related movements within Islam" }, { "paragraph_id": 104, "text": "Muslims for Progressive Values, based in the United States and Malaysia, is \"a faith-based, grassroots, human rights organization that embodies and advocates for the traditional Qur'anic values of social justice and equality for all, for the 21st Century.\" The Mecca Institute is an LGBT-inclusive and progressive online Islamic seminary, and serves as an online center of Islamic learning and research.", "title": "LGBT-related movements within Islam" }, { "paragraph_id": 105, "text": "The Al-Fatiha Foundation was an organization which tried to advance the cause of gay, lesbian, and transgender Muslims. It was founded in 1998 by Faisal Alam, a Pakistani American, and was registered as a nonprofit organization in the United States. The organization was an offshoot of an internet listserve that brought together many gay, lesbian and questioning Muslims from various countries.", "title": "LGBT-related movements within Islam" }, { "paragraph_id": 106, "text": "There are a number of Islamic ex-gay organizations, that is, those composed of people claiming to have experienced a basic change in sexual orientation from exclusive homosexuality to exclusive heterosexuality. These groups, like those based in socially conservative Christianity, are aimed at attempting to guide homosexuals towards heterosexuality. 
One of the leading LGBT reformatory Muslim organizations is the StraightWay Foundation, established in the United Kingdom in 2004 to provide information and advice for Muslims who struggle with homosexual attraction. They believe \"that through following God's guidance\", one may \"cease to be\" gay. They teach that the male-female pair is the \"basis for humanity's growth\" and that homosexual acts \"are forbidden by God\". NARTH has written favourably of the group. In 2004, Straightway entered into a controversy with the then Mayor of London, Ken Livingstone, and the controversial Islamic cleric Yusuf al-Qaradawi. It was suggested that Livingstone was giving a platform to Islamic fundamentalists, and not liberal and progressive Muslims. Straightway responded to this by sending Livingstone a letter thanking him for his support of al-Qaradawi. Livingstone then ignited controversy when he thanked Straightway for the letter.", "title": "LGBT-related movements within Islam" }, { "paragraph_id": 107, "text": "Several anti-LGBT incidents have occurred:", "title": "LGBT-related movements within Islam" }, { "paragraph_id": 108, "text": "There are a number of Muslim LGBT activists from different parts of the world. Some of them are listed below:", "title": "Muslim LGBT rights activists" }, { "paragraph_id": 109, "text": "In 2010, an anthology Islam and Homosexuality was published. 
In the foreword, Parvez Sharma sounded a pessimistic note about the future: \"In my lifetime I do not see Islam drafting a uniform edict that homosexuality is permissible.\" Following is material from two chapters dealing with the present:", "title": "In popular culture" }, { "paragraph_id": 110, "text": "Rusmir Musić in a chapter \"Queer Visions of Islam\" said that \"Queer Muslims struggle daily to reconcile their sexuality and their faith.\" Musić began to study in college \"whether or not my love for somebody of the same gender disgusts God and whether it will propel me to hell. The answer, for me, is an unequivocal no.\" Furthermore, Musić wrote, \"my research and reflection helped me to imagine my sexuality as a gift from a loving, not hateful, God.\"", "title": "In popular culture" }, { "paragraph_id": 111, "text": "Marhuq Fatima Khan in a chapter \"Queer, American, and Muslim: Cultivating Identities and Communities of Affirmation,\" says that \"Queer Muslims employ a few narratives to enable them to reconcile their religious and sexual identities.\" They \"fall into three broad categories: (1) God Is Merciful; (2) That Is Just Who I Am; and (3) It's Not Just Islam.\"", "title": "In popular culture" }, { "paragraph_id": 112, "text": "In his 2003 book Progressive Muslims: On Justice, Gender, and Pluralism, Professor Scott Siraj al-Haqq Kugle asserts \"that Islam does not address homosexuality.\" In Kugle's reading, the Quran holds \"a positive assessment of diversity.\" It \"respects diversity in physical appearance, constitution, stature, and color of human beings as a natural consequence of Divine wisdom in creation.\" Therefore, Islam can be described as \"a religion that positively assesses diversity in creation and in human societies.\" Furthermore, in Kugle's reading, the Quran \"implies that some people are different in their sexual desires than others.\" Thus, homosexuality can be seen as part of the \"natural diversity in sexuality in human societies.\" 
This is the way \"gay and lesbian Muslims\" view their homosexuality.", "title": "In popular culture" }, { "paragraph_id": 113, "text": "In addition to the Qur'an, Kugle refers to the benediction of Imam Al-Ghazali (the 11th-century Muslim theologian) which says \"praise be to God, the marvels of whose creation are not subject to the arrows of accident.\" For Kugle, this benediction implies that \"if sexuality is inherent in a person's personality, then sexual diversity is a part of creation, which is never accidental but is always marvelous.\" Kugle also refers to \"a rich archive of same-sex sexual desires and expressions, written by or reported about respected members of society: literati, educated elites, and religious scholars.\" Given these writings, Kugle concludes that \"one might consider Islamic societies (like classical Greece) to provide a vivid illustration of a 'homosexual-friendly' environment.\" This evoked from \"medieval and early modern Christian Europeans\" accusations that Muslims were \"engaging openly in same-sex practices.\"", "title": "In popular culture" }, { "paragraph_id": 114, "text": "Kugle goes a step further in his argument and asserts that \"if some Muslims find it necessary to deny that sexual diversity is part of the natural created world, then the burden of proof rests on their shoulders to illustrate their denial from the Qur'anic discourse itself.\"", "title": "In popular culture" }, { "paragraph_id": 115, "text": "Kecia Ali in her 2016 book Sexual Ethics and Islam says that \"there is no one Muslim perspective on anything.\" Regarding the Quran, Ali says that modern scholars disagree about what it says about \"same-sex intimacy.\" Some scholars argue that \"the Qur'an does not address homosexuality or homosexuals explicitly.\"", "title": "In popular culture" }, { "paragraph_id": 116, "text": "Regarding homosexuality, Ali says the belief that \"exclusively homosexual desire is innate in some individuals\" has been adopted \"even 
among some relatively conservative Western Muslim thinkers.\" Homosexual Muslims believe their homosexuality to be innate and view \"their sexual orientation as God-given and immutable.\" She observes that \"queer and trans people are sometimes treated as defective or deviant,\" and adds that it is \"vital not to assume that variation implies imperfection or disability.\"", "title": "In popular culture" }, { "paragraph_id": 117, "text": "Regarding \"medieval Muslim culture,\" Ali says that \"male desire to penetrate desirable youth ... was perfectly normal.\" Even if same-sex relations were not lawful, there was \"an unwillingness to seek out and condemn instances of same-sex activity, but rather to let them pass by ... unpunished.\" Ali states that some scholars claim that Islamic societies were 'homosexual-friendly' in history.", "title": "In popular culture" }, { "paragraph_id": 118, "text": "In her article \"Same-sex Sexual Activity and Lesbian and Bisexual Women\", Ali elaborates on homosexuality as an aspect of medieval Muslim culture. She says that \"same-sex sexual expression has been a more or less recognized aspect of Muslim societies for many centuries.\" There are many explicit discussions of \"same-sex sexual activity\" in medieval Arabic literature. Ali states there is a lack of focus in the medieval tradition on female same-sex sexual activity, as the Qur'an mainly focuses on male/male sex. With female same-sex sexual activity there is more focus on the punishment for the acts and the complications with the dower, compared to men, where there is a focus on punishment but also on the need for ablutions and the effect of the act on possible marriage decisions.", "title": "In popular culture" } ]
Within the Muslim world, sentiment towards LGBT people varies and has varied between societies and individual Muslims, but is today generally negative. While colloquial and, in many cases, de facto official acceptance of at least some homosexual behavior was commonplace in pre-modern periods, later developments, starting from the 19th century, have created a generally hostile environment for LGBT people. Most Muslim-majority countries have opposed moves to advance LGBT rights and recognition at the United Nations (UN), including within the UN General Assembly and the UN Human Rights Council. Meanwhile, contemporary Islamic jurisprudence generally accepts the possibility for transgender people (mukhannith/mutarajjilah) to change their gender status, but only after surgery, linking one's gender to biological markers. Trans people are nonetheless confronted with stigma, discrimination, intimidation, and harassment in many Muslim-majority societies. Transgender identities are often considered under the gender binary, although some pre-modern scholars had recognized effeminate men as a form of third gender, as long as their behaviour was naturally in contrast to their assigned gender at birth. There are differences between how the Qur'an and later hadith traditions treat homosexuality, with many Western scholars arguing that the latter is far more explicitly negative. Citing these differences, these scholars have argued that Muhammad, the main Islamic prophet, never forbade homosexual relationships outright, although he disapproved of them in line with his contemporaries. There is, however, comparatively little evidence of homosexual practices being prevalent in Muslim societies for the first century and a half of Islamic history; male homosexual relationships were known of and discriminated against in Arabia, but were generally not met with legal sanctions. 
In later pre-modern periods, historical evidence of homosexual relationships is more common, and shows de facto tolerance of these relationships. Historical records suggest that laws against homosexuality were invoked infrequently, mainly in cases of rape or other "exceptionally blatant infringement on public morals" as defined by Islamic law. This allowed themes of homoeroticism and pederasty to be cultivated in Islamic poetry and other Islamic literary genres, written in major languages of the Muslim world, from the 8th century CE into the modern era. The conceptions of homosexuality found in these texts resembled the traditions of ancient Greece and ancient Rome as opposed to the modern understanding of sexual orientation. In the modern era, Muslim public attitudes towards homosexuality underwent a marked change beginning in the 19th century, largely due to the global spread of Islamic fundamentalist movements, namely Salafism and Wahhabism. The Muslim world was also influenced by the sexual notions and restrictive norms that were prevalent in the Christian world at the time, particularly with regard to anti-homosexual legislation throughout European societies, most of which adhered to Christian law. As such, a number of Muslim-majority countries that were once colonies of European empires retain the criminal penalties that were originally implemented by European colonial authorities against those who were convicted of engaging in non-heterosexual acts. Therefore, modern Muslim homophobia is generally not thought to be a direct continuation of pre-modern mores, but a phenomenon that has been shaped by a variety of local and imported frameworks. As Western culture eventually moved towards secularism and thus enabled a platform for the flourishing of many LGBT movements, many Muslim fundamentalists came to associate the Western world with "ravaging moral decay" and rampant homosexuality. 
In contemporary society, prejudice, anti-LGBT discrimination and anti-LGBT violence, including within legal systems, persist in much of the Muslim world, exacerbated by socially conservative attitudes and the recent rise of Islamist ideologies in some countries; there are laws in place against homosexual activities in a large number of Muslim-majority countries, with a number of them prescribing the death penalty for convicted offenders.
2002-01-03T20:59:53Z
2023-12-31T18:42:36Z
[ "Template:Lang", "Template:Citation", "Template:In lang", "Template:Lang-ar", "Template:Cn", "Template:Cite book", "Template:Cite news", "Template:Cite thesis", "Template:Refbegin", "Template:Cite web", "Template:Refend", "Template:Request quotation", "Template:Further", "Template:Legend", "Template:Sfn", "Template:Portal", "Template:Cite encyclopedia", "Template:Webarchive", "Template:Cite magazine", "Template:Short description", "Template:About", "Template:Pp", "Template:Main", "Template:Anchor", "Template:Verify source", "Template:Religion and LGBT people", "Template:Islam and other religions", "Template:Nowrap", "Template:Dead link", "Template:Rp", "Template:Cite journal", "Template:ISBN", "Template:Commons category", "Template:Blockquote", "Template:Wikiquote", "Template:Reflist", "Template:Bullet", "Template:Cbignore", "Template:Islam", "Template:When", "Template:Qref" ]
https://en.wikipedia.org/wiki/LGBT_people_and_Islam
15,474
Infanticide
Infanticide (or infant homicide) is the intentional killing of infants or offspring. Infanticide was a widespread practice throughout human history, used mainly to dispose of unwanted children and to avoid spending resources on weak or disabled offspring. Unwanted infants were usually abandoned to die of exposure, but in some societies they were deliberately killed. Infanticide is broadly illegal, but in some places the practice is tolerated, or the prohibition is not strictly enforced. Most Stone Age human societies routinely practiced infanticide, and estimates of the share of children killed by infanticide in the Mesolithic and Neolithic eras vary from 15 to 50 percent. Infanticide continued to be common in most societies after the historical era began, including ancient Greece, ancient Rome, the Phoenicians, ancient China, ancient Japan, pre-Islamic Arabia, Aboriginal Australia, Native Americans, and Native Alaskans. Infanticide became forbidden in Europe and the Near East during the 1st millennium. Christianity forbade infanticide from its earliest times, which led Constantine the Great and Valentinian I to ban infanticide across the Roman Empire in the 4th century. Yet infanticide remained acceptable in some wars, and infanticide in Europe reached its peak during World War II (1939–45), during the Holocaust and the T4 Program. The practice ceased in Arabia in the 7th century after the founding of Islam, since the Quran prohibits infanticide. Infanticide of male babies had become uncommon in China by the Ming dynasty (1368–1644), whereas infanticide of female babies became more common during the One-Child Policy era (1979–2015). During the period of Company rule in India, the East India Company attempted to eliminate infanticide but was only partially successful, and female infanticide in some parts of India still continues. Infanticide is very rare in industrialised countries but may persist elsewhere. 
Researchers of parental infanticide have found that mothers are more likely than fathers to commit infanticide. In the special case of neonaticide (murder in the first 24 hours of life), mothers account for almost all the perpetrators. Paternal cases of neonaticide are so rare that they are individually recorded. The practice of infanticide has taken many forms over time. Child sacrifice to supernatural figures or forces, such as that believed to have been practiced in ancient Carthage, may be only the most notorious example in the ancient world. A frequent method of infanticide in ancient Europe and Asia was simply to abandon the infant, leaving it to die by exposure (i.e., hypothermia, hunger, thirst, or animal attack). On at least one island in Oceania, infanticide was carried out until the 20th century by suffocating the infant, while in pre-Columbian Mesoamerica and in the Inca Empire it was carried out by sacrifice (see below). Many Neolithic groups routinely resorted to infanticide in order to control their numbers so that their lands could support them. Joseph Birdsell believed that infanticide rates in prehistoric times were between 15% and 50% of the total number of births, while Laila Williamson estimated a lower rate ranging from 15% to 20%. Both anthropologists believed that these high rates of infanticide persisted until the development of agriculture during the Neolithic Revolution. A book published in 1981 stated that comparative anthropologists estimated that 50% of female newborn babies may have been killed by their parents during the Paleolithic era. The anthropologist Raymond Dart has interpreted fractures on the skulls of hominid infants (e.g. the Taung Child) as due to deliberate killing followed by cannibalism, but such explanations are by now considered uncertain and possibly wrong. 
Children were not necessarily actively killed; neglect and intentional malnourishment may also have occurred, as proposed by Vicente Lull as an explanation for an apparent surplus of men and the below-average height of women in prehistoric Menorca. Archaeologists have uncovered physical evidence of child sacrifice at several locations. Some of the best-attested examples are the diverse rites which were part of the religious practices in Mesoamerica and the Inca Empire. Three thousand bones of young children, with evidence of sacrificial rituals, have been found in Sardinia. Pelasgians offered a sacrifice of every tenth child during difficult times. Many remains of children have been found in Gezer excavations with signs of sacrifice. Child skeletons with the marks of sacrifice have been found also in Egypt dating to 950–720 BCE. In Carthage "[child] sacrifice in the ancient world reached its infamous zenith". Besides the Carthaginians, other Phoenicians, and the Canaanites, Moabites and Sepharvites offered their first-born as a sacrifice to their gods. In Egyptian households, at all social levels, children of both sexes were valued and there is no evidence of infanticide. The religion of the ancient Egyptians forbade infanticide, and during the Greco-Roman period Egyptians rescued abandoned babies from manure heaps, a common method of infanticide among Greeks and Romans, and were allowed either to adopt them as foundlings or to raise them as slaves, often giving them names such as "copro-" to memorialize their rescue. Strabo considered it a peculiarity of the Egyptians that every child must be reared. Diodorus indicates infanticide was a punishable offence. Egypt was heavily dependent on the annual flooding of the Nile to irrigate the land, and in years of low inundation severe famine could occur, with breakdowns in social order resulting, notably between 930–1070 CE and 1180–1350 CE. 
Instances of cannibalism are recorded during these periods, but it is unknown if this happened during the pharaonic era of ancient Egypt. Beatrix Midant-Reynes describes human sacrifice as having occurred at Abydos in the early dynastic period (c. 3150–2850 BCE), while Jan Assmann asserts there is no clear evidence of human sacrifice ever happening in ancient Egypt. According to Shelby Brown, Carthaginians, descendants of the Phoenicians, sacrificed infants to their gods. Charred bones of hundreds of infants have been found in Carthaginian archaeological sites. One such area harbored as many as 20,000 burial urns. Skeptics suggest that the bodies of children found in Carthaginian and Phoenician cemeteries were merely the cremated remains of children who died naturally. Plutarch (c. 46–120 CE) mentions the practice, as do Tertullian, Orosius, Diodorus Siculus and Philo. The Hebrew Bible also mentions what appears to be child sacrifice practiced at a place called the Tophet (from the Hebrew taph or toph, to burn) by the Canaanites. Writing in the 3rd century BCE, Kleitarchos, one of the historians of Alexander the Great, described infants being rolled into the flaming pit. Diodorus Siculus wrote that babies were roasted to death inside the burning pit of the god Baal Hamon, a bronze statue. The historical Greeks considered the practice of adult and child sacrifice barbarous; however, infant exposure was widely practiced in ancient Greece. It was advocated by Aristotle in the case of congenital deformity: "As to the exposure of children, let there be a law that no deformed child shall live." In Greece, the decision to expose a child was typically the father's, although in Sparta the decision was made by a group of elders. Exposure was the preferred method of disposal, as that act in itself was not considered to be murder; moreover, the exposed child technically had a chance of being rescued by the gods or any passersby. 
This very situation was a recurring motif in Greek mythology. To notify the neighbors of the birth of a child, a woolen strip was hung over the front door to indicate a female baby, and an olive branch to indicate a boy had been born. Families did not always keep their new child. After a woman had a baby, she would show it to her husband. If the husband accepted it, it would live, but if he refused it, it would die. Babies would often be rejected if they were illegitimate, unhealthy or deformed, the wrong sex, or too great a burden on the family. These babies would not be directly killed, but put in a clay pot or jar and deserted outside the front door or on the roadway. In ancient Greek religion, this practice took the responsibility away from the parents because the child would die of natural causes, for example, hunger, asphyxiation or exposure to the elements. The practice was prevalent in ancient Rome as well. Philo was the first known philosopher to speak out against it. A letter from a Roman citizen to his sister, or a pregnant wife from her husband, dating from 1 BCE, demonstrates the casual nature with which infanticide was often viewed. In some periods of Roman history it was traditional for a newborn to be brought to the pater familias, the family patriarch, who would then decide whether the child was to be kept and raised, or left to die by exposure. The Twelve Tables of Roman law obliged him to put to death a child that was visibly deformed. The concurrent practices of slavery and infanticide contributed to the "background noise" of the crises during the Republic. Infanticide became a capital offense in Roman law in 374, but offenders were rarely if ever prosecuted. According to mythology, Romulus and Remus, twin infant sons of the war god Mars, survived near-infanticide after being tossed into the Tiber River. According to the myth, they were raised by wolves, and later founded the city of Rome. 
Whereas theologians and clerics preached sparing infants' lives, newborn abandonment continued, as registered in both the literary record and in legal documents. According to William Lecky, exposure in the early Middle Ages, as distinct from other forms of infanticide, "was practiced on a gigantic scale with absolute impunity, noticed by writers with most frigid indifference and, at least in the case of destitute parents, considered a very venial offence". However, the first foundling house in Europe was established in Milan in 787 on account of the high number of infanticides and out-of-wedlock births. The Hospital of the Holy Spirit in Rome was founded by Pope Innocent III because women were throwing their infants into the Tiber river. Unlike other European regions, in the Middle Ages the German mother had the right to expose the newborn. In the High Middle Ages, abandoning unwanted children finally eclipsed infanticide. Unwanted children were left at the door of a church or abbey, and the clergy was assumed to take care of their upbringing. This practice also gave rise to the first orphanages. However, very high sex ratios were common in even late medieval Europe, which may indicate sex-selective infanticide. The Waldensians, a pre-Reformation medieval Christian sect deemed heretical by the Catholic Church, were accused of participating in infanticide. Judaism prohibits infanticide, and has for some time, dating back to at least the early Common Era. Roman historians wrote about the ideas and customs of other peoples, which often diverged from their own. Tacitus recorded that the Jews "take thought to increase their numbers, for they regard it as a crime to kill any late-born children". Josephus, whose works give an important insight into 1st-century Judaism, wrote that God "forbids women to cause abortion of what is begotten, or to destroy it afterward". In his book Germania, Tacitus wrote in 98 CE that the ancient Germanic tribes enforced a similar prohibition. 
He found such mores remarkable and commented: "To restrain generation and the increase of children, is esteemed [by the Germans] an abominable sin, as also to kill infants newly born." It has become clear over the millennia, though, that Tacitus' description was inaccurate; the consensus of modern scholarship significantly differs. John Boswell believed that in ancient Germanic tribes unwanted children were exposed, usually in the forest. "It was the custom of the [Teutonic] pagans, that if they wanted to kill a son or daughter, they would be killed before they had been given any food." Usually children born out of wedlock were disposed of that way. In his highly influential Pre-historic Times, John Lubbock described burnt bones indicating the practice of child sacrifice in pagan Britain. The last canto, Marjatan poika (Son of Marjatta), of the Finnish national epic Kalevala describes an assumed infanticide: Väinämöinen orders the infant bastard son of Marjatta to be drowned in a marsh. The Íslendingabók, the main source for the early history of Iceland, recounts that at the Conversion of Iceland to Christianity in 1000 it was provided – in order to make the transition more palatable to Pagans – that "the old laws allowing exposure of newborn children will remain in force". However, this provision – among other concessions made at the time to the Pagans – was abolished some years later. Christianity explicitly rejects infanticide. The Teachings of the Apostles or Didache said "thou shalt not kill a child by abortion, neither shalt thou slay it when born". The Epistle of Barnabas stated an identical command, both thus conflating abortion and infanticide. Apologists Tertullian, Athenagoras, Minucius Felix, Justin Martyr and Lactantius also maintained that exposing a baby to death was a wicked act. In 318, Constantine I considered infanticide a crime, and in 374, Valentinian I mandated the rearing of all children (exposing babies, especially girls, was still common). 
The Council of Constantinople declared that infanticide was homicide, and in 589, the Third Council of Toledo took measures against the custom of parents killing their own children. Some Muslim sources allege that pre-Islamic Arabian society practiced infanticide as a form of "post-partum birth control". The word waʾd was used to describe the practice. These sources state that infanticide was practiced either out of destitution (thus practiced on males and females alike), or as "disappointment and fear of social disgrace felt by a father upon the birth of a daughter". Some authors believe that there is little evidence that infanticide was prevalent in pre-Islamic Arabia or early Muslim history, except for the case of the Tamim tribe, who practiced it during severe famine according to Islamic sources. Others state that "female infanticide was common all over Arabia during this period of time" (pre-Islamic Arabia), especially by burying alive a female newborn. A tablet discovered in Yemen, forbidding the people of a certain town from engaging in the practice, is the only written reference to infanticide within the peninsula in pre-Islamic times. Infanticide is explicitly prohibited by the Qur'an: "And do not kill your children for fear of poverty; We give them sustenance and yourselves too; surely to kill them is a great wrong." Together with polytheism and homicide, infanticide is regarded as a grave sin (see 6:151 and 60:12). Infanticide is also implicitly denounced in the story of Pharaoh's slaughter of the male children of Israelites (see 2:49; 7:127; 7:141; 14:6; 28:4; 40:25). Infanticide may have been practiced as human sacrifice, as part of the pagan cult of Perun. Ibn Fadlan describes sacrificial practices at the time of his trip to Kiev Rus (present-day Ukraine) in 921–922, and describes an incident of a woman voluntarily sacrificing her life as part of a funeral rite for a prominent leader, but makes no mention of infanticide. 
The Primary Chronicle, one of the most important literary sources before the 12th century, indicates that human sacrifice to idols may have been introduced by Vladimir the Great in 980. The same Vladimir the Great formally converted Kiev Rus to Christianity just eight years later, but pagan cults continued to be practiced clandestinely in remote areas as late as the 13th century. American explorer George Kennan noted that among the Koryaks, a people of north-eastern Siberia, infanticide was still common in the nineteenth century. One of a pair of twins was always sacrificed. Infanticide (as a crime) gained both popular and bureaucratic significance in Victorian Britain. By the mid-19th century, in the context of criminal lunacy and the insanity defence, killing one's own child(ren) attracted ferocious debate, as the role of women in society was defined by motherhood, and it was thought that any woman who murdered her own child was by definition insane and could not be held responsible for her actions. Several cases were subsequently highlighted during the Royal Commission on Capital Punishment 1864–66, as a particular felony where an effective avoidance of the death penalty had informally begun. The New Poor Law Act of 1834 ended parish relief for unmarried mothers and allowed fathers of illegitimate children to avoid paying for "child support". Unmarried mothers then received little assistance, and the poor were left with the option of either entering the workhouse, turning to prostitution, resorting to infanticide, or choosing abortion. By the middle of the century infanticide was common for social reasons, such as illegitimacy, and the introduction of child life insurance additionally encouraged some women to kill their children for gain. 
Examples include Mary Ann Cotton, who murdered many of her 15 children as well as three husbands; Margaret Waters, the 'Brixton Baby Farmer', a professional baby-farmer who was found guilty of infanticide in 1870; Jessie King, who was hanged in 1889; Amelia Dyer, the 'Angel Maker', who murdered over 400 babies in her care; and Ada Chard-Williams, a baby farmer who was later hanged at Newgate prison. The Times reported that 67 infants were murdered in London in 1861 and 150 more recorded as "found dead", many of which were found on the streets. Another 250 were suffocated, half of them not recorded as accidental deaths. The report noted that "infancy in London has to creep into life in the midst of foes." Recording a birth as a still-birth was another way of concealing infanticide, because still-births did not need to be registered until 1926 and they did not need to be buried in public cemeteries. In 1895 The Sun (London) published the article "Massacre of the Innocents", highlighting the dangers of baby-farming, the recording of stillbirths, and quoting Athelstan Braxton Hicks, the London coroner, on lying-in houses: I have not the slightest doubt that a large amount of crime is covered by the expression 'still-birth'. There are a large number of cases of what are called newly-born children, which are found all over England, more especially in London and large towns, abandoned in streets, rivers, on commons, and so on... [A] great deal of that crime is due to what are called lying-in houses, which are not registered, or under supervision of any sort, where the people who act as midwives constantly, as soon as the child is born, either drop it into a pail of water or smother it with a damp cloth. It is a very common thing, also, to find that they bash their heads on the floor and break their skulls. The last British woman to be executed for infanticide of her own child was Rebecca Smith, who was hanged in Wiltshire in 1849. 
The Infant Life Protection Act of 1897 required local authorities to be notified within 48 hours of changes in custody or the death of children under seven years. Under the Children's Act of 1908 "no infant could be kept in a home that was so unfit and so overcrowded as to endanger its health, and no infant could be kept by an unfit nurse who threatened, by neglect or abuse, its proper care, and maintenance." As early as the 3rd century BCE, the legal codes of the Qin and Han dynasties of ancient China imposed on practitioners of infanticide the harshest penalties short of execution. Chinese society practiced sex-selective infanticide. Philosopher Han Fei Tzu, a member of the ruling aristocracy of the 3rd century BCE, who developed a school of law, wrote: "As to children, a father and mother when they produce a boy congratulate one another, but when they produce a girl they put it to death." Among the Hakka people, and in Yunnan, Anhui, Sichuan, Jiangxi and Fujian, a method of killing the baby was to put her into a bucket of cold water, which was called "baby water". Infanticide was reported as early as the 3rd century BCE, and, by the time of the Song dynasty (960–1279 CE), it was widespread in some provinces. Belief in transmigration allowed poor residents of the country to kill their newborn children if they felt unable to care for them, hoping that they would be reborn in better circumstances. Furthermore, some Chinese did not consider newborn children fully "human" and saw "life" beginning at some point after the sixth month after birth. The Venetian explorer Marco Polo claimed to have seen newborns exposed in Manzi. Contemporary writers from the Song dynasty note that, in Hubei and Fujian provinces, residents would only keep three sons and two daughters (among poor farmers, two sons and one daughter), and kill all babies beyond that number at birth. Initially the sex of the child was only one factor to consider. 
By the time of the Ming dynasty (1368–1644), however, male infanticide was becoming increasingly uncommon. The prevalence of female infanticide remained high much longer. The magnitude of this practice is subject to some dispute; however, one commonly quoted estimate is that, by the late Qing, between one-fifth and one-quarter of all newborn girls, across the entire social spectrum, were victims of infanticide. If one includes excess mortality among female children under 10 (ascribed to gender-differential neglect), the share of victims rises to one-third. Scottish physician John Dudgeon, who worked in Peking, China, during the early 20th century, said that, "Infanticide does not prevail to the extent so generally believed among us, and in the north, it does not exist at all." Sex-selective abortion, sex identification without medical need, abandonment, and infanticide are illegal in present-day Mainland China. Nevertheless, the US State Department and the human rights organization Amnesty International have both declared that Mainland China's family planning programs, called the one-child policy (which has since changed to a two-child policy), contribute to infanticide. The sex gap between males and females aged 0–19 years old was estimated to be 25 million in 2010 by the United Nations Population Fund. In some cases, in order to avoid Mainland China's family planning programs, parents do not report a child's birth to the government (in most cases a girl), so that the child has no official identity and the parents can keep on having children until they are satisfied, without fines or punishment. In 2017, the government announced that all children without an identity could now legally obtain one through the family register. In Japan, since the feudal Edo era, the common slang for infanticide has been mabiki (間引き), which means to pull plants from an overcrowded garden. A typical method in Japan was smothering the baby's mouth and nose with wet paper. 
It became common as a method of population control. Farmers would often kill their second or third sons. Daughters were usually spared, as they could be married off, sold off as servants or prostitutes, or sent off to become geishas. Mabiki persisted into the 19th and early 20th centuries. Bearing twins was perceived as barbarous and unlucky, and efforts were made to hide or kill one or both twins. Infanticide of newborn girls was systematic among the feudatory Rajputs of South Asia, particularly of illegitimate female children, during the Middle Ages. According to Firishta, as soon as an illegitimate female child was born she was held "in one hand, and a knife in the other, that any person who wanted a wife might take her now, otherwise she was immediately put to death". The practice of female infanticide was also common among the Kutch, Kehtri, Nagar, Bengal, Miazed, Kalowries and Sindh communities. It was not uncommon for parents to throw a child to the sharks in the Ganges River as a sacrificial offering. The East India Company administration was unable to outlaw the custom until the beginning of the 19th century. According to social activists, female infanticide has remained a problem in India into the 21st century, with both NGOs and the government conducting awareness campaigns to combat it. In some African societies some neonates were killed because of beliefs in evil omens or because they were considered unlucky. Twins were usually put to death in Arebo, as well as by the Nama people of South West Africa, in the Lake Victoria Nyanza region, by the Tswana in Portuguese East Africa, and by the !Kung people of the Kalahari Desert. In some parts of Igboland, Nigeria, twins were sometimes abandoned in a forest at birth (as depicted in Things Fall Apart); oftentimes one twin was killed or hidden by midwives of wealthier mothers. The Kikuyu, Kenya's most populous ethnic group, practiced ritual killing of twins.
Infanticide is rooted in old traditions and beliefs prevailing all over the country. A survey conducted by Disability Rights International found that 45% of the women it interviewed in Kenya were pressured to kill their children born with disabilities. The pressure is much higher in rural areas, where two out of every three mothers were so pressured. An 1866 issue of The Australian News for Home Readers informed readers that "the crime of infanticide is so prevalent amongst the natives that it is rare to see an infant". Author Susanna de Vries said in 2007 that her accounts of Aboriginal violence, including infanticide, were censored by publishers in the 1980s and 1990s. She told reporters that the censorship "stemmed from guilt over the stolen children question". Keith Windschuttle weighed in on the conversation, saying this type of censorship started in the 1970s. In the same article Louis Nowra suggested that infanticide in customary Aboriginal law may have arisen because it was difficult to keep an abundant number of Aboriginal children alive; these were life-and-death decisions modern-day Australians no longer have to face. Liz Conor's 2016 work, Skin Deep: Settler Impressions of Aboriginal Women, the culmination of 10 years of research, found that stories about Aboriginal women were told through a colonial lens of racism and misogyny. Vague stories of infanticide and cannibalism were repeated as reliable facts, and sometimes originated in accounts told by members of rival tribes about each other. She also refers to Daisy Bates' now contested accounts of such practices, reproaching some historians for accepting them too uncritically. According to William D. Rubinstein, "Nineteenth-century European observers of Aboriginal life in South Australia and Victoria reported that about 30% of Aboriginal infants were killed at birth."
In 1881 James Dawson wrote a passage about infanticide among Indigenous people in the western district of Victoria, which stated that "Twins are as common among them as among Europeans; but as food is occasionally very scarce, and a large family troublesome to move about, it is lawful and customary to destroy the weakest twin child, irrespective of sex. It is usual also to destroy those which are malformed." He also wrote: "When a woman has children too rapidly for the convenience and necessities of the parents, she makes up her mind to let one be killed, and consults with her husband which it is to be. As the strength of a tribe depends more on males than females, the girls are generally sacrificed. The child is put to death and buried, or burned without ceremony; not, however, by its father or mother, but by relatives. No one wears mourning for it. Sickly children are never killed on account of their bad health, and are allowed to die naturally." In 1937, a Christian reverend in the Kimberley offered a "baby bonus" to Aboriginal families as a deterrent against infanticide and to increase the birthrate of the local Indigenous population. A Canberran journalist in 1927 wrote of the "cheapness of life" to the Aboriginal people local to the Canberra area 100 years before. "If drought or bush fires had devastated the country and curtailed food supplies, babies got short shrift. Ailing babies, too, would not be kept", he wrote. A bishop wrote in 1928 that it was common for Aboriginal Australians to restrict the size of their tribal groups, including by infanticide, so that the food resources of the tribal area might be sufficient for them.
Annette Hamilton, a professor of anthropology at Macquarie University who carried out research in the Aboriginal community of Maningrida in Arnhem Land during the 1960s, wrote that prior to that time part-European babies born to Aboriginal mothers had not been allowed to live, and that "mixed-unions are frowned on by men and women alike as a matter of principle". There is no agreement on the actual frequency of newborn female infanticide in the Inuit population; Carmel Schrire mentions diverse studies with estimates ranging from 15–50% up to 80%. Polar Inuit (Inughuit) killed the child by throwing him or her into the sea. There is even a legend in Inuit mythology, "The Unwanted Child", in which a mother throws her child into the fjord. The Yukon and the Mahlemuit tribes of Alaska exposed female newborns by first stuffing their mouths with grass before leaving them to die. In Arctic Canada the Inuit exposed their babies on the ice and left them to die. Female Inuit infanticide disappeared in the 1930s and 1940s after contact with the Western cultures from the South. However, it must be acknowledged that these infanticide claims came from non-Inuit observers, whose writings were later used to justify the forced westernization of indigenous peoples. Travis Hedwig argues that infanticide ran counter to cultural norms at the time and that researchers were misinterpreting the actions of an unfamiliar culture and people. The Handbook of North American Indians reports infanticide among the Dene Natives and those of the Mackenzie Mountains. Among the Eastern Shoshone there was a scarcity of Native American women as a result of female infanticide. For the Maidu Native Americans twins were so dangerous that they killed not only the twins but the mother as well. In the region known today as southern Texas, the Mariame Native Americans practiced infanticide of females on a large scale, so wives had to be obtained from neighboring groups.
Bernal Díaz recounted that, after landing on the Veracruz coast, the Spaniards came across a temple dedicated to Tezcatlipoca: "That day they had sacrificed two boys, cutting open their chests and offering their blood and hearts to that accursed idol". In The Conquest of New Spain Díaz describes more child sacrifices in the towns before the Spaniards reached the large Aztec city of Tenochtitlan. Although academic data on infanticide among the indigenous peoples of South America is not as abundant as that for North America, the estimates seem to be similar. The Tapirapé indigenous people of Brazil allowed no more than three children per woman, and no more than two of the same sex; if the rule was broken, infanticide was practiced. The Bororo killed all newborns that did not appear healthy enough. Infanticide is also documented among the Korubo people in the Amazon. Yanomami men killed children while raiding enemy villages. Helena Valero, a Brazilian woman kidnapped by Yanomami warriors in the 1930s, witnessed a Karawetari raid on her tribe: "They killed so many. I was weeping for fear and for pity but there was nothing I could do. They snatched the children from their mothers to kill them, while the others held the mothers tightly by the arms and wrists as they stood up in a line. All the women wept. ... The men began to kill the children; little ones, bigger ones, they killed many of them." While qhapaq hucha was practiced in the large Peruvian cities, child sacrifice in the pre-Columbian tribes of the region is less documented. However, even today studies of the Aymara Indians reveal high incidences of mortality among newborns, especially female deaths, suggesting infanticide. The Abipones, a small tribe of Guaycuruan stock numbering about 5,000 by the end of the 18th century in Paraguay, practiced systematic infanticide, with never more than two children being reared in one family. The Machigenga killed their disabled children.
Infanticide among the Chaco in Paraguay was estimated to be as high as 50% of all newborns in that tribe, who were usually buried. The infanticidal custom had such roots among the Ayoreo in Bolivia and Paraguay that it persisted until the late 20th century. Infanticide has become less common in the Western world. The frequency has been estimated to be 1 in approximately 3,000 to 5,000 children of all ages and 2.1 per 100,000 newborns per year. It is thought that infanticide today continues at a much higher rate in areas of extremely high poverty and overpopulation, such as parts of India. Female infants, then and even now, are particularly vulnerable, a factor in sex-selective infanticide. Recent estimates suggest that over 100 million girls and women are 'missing' in Asia. In spite of the fact that it is illegal, in Benin, West Africa, parents secretly continue with infanticidal customs. There have been some accusations that infanticide occurs in Mainland China due to the one-child policy. In the 1990s, a certain stretch of the Yangtze River was known to be a common site of infanticide by drowning, until government projects made access to it more difficult. One study suggests that over 40 million girls and women are missing in Mainland China (Klasen and Wink 2002). The practice has continued in some rural areas of India. India has the highest infanticide rate in the world, despite infanticide being illegal. According to a 2005 report by the United Nations Children's Fund (UNICEF), up to 50 million girls and women are missing from India's population as a result of systematic sex discrimination and sex-selective abortions. Killings of newborn babies have been on the rise in Pakistan, corresponding to an increase in poverty across the country. More than 1,000 infants, mostly girls, were killed or abandoned to die in Pakistan in 2009 according to a Pakistani charity organization. The Edhi Foundation found 1,210 dead babies in 2010.
Many more are abandoned and left at the doorsteps of mosques. As a result, Edhi centers feature signs reading "Do not murder, lay them here." Though female infanticide is punishable by life in prison, such crimes are rarely prosecuted. On November 28, 2008, The National, one of Papua New Guinea's two largest newspapers at the time, ran a story entitled "Male Babies Killed To Stop Fights". It claimed that in the Agibu and Amosa villages of the Gimi region of Eastern Highlands province, where tribal fighting had been going on since 1986 (many of the clashes arising over claims of sorcery), the women had agreed that if they stopped producing males, allowing only female babies to survive, their tribe's stock of boys would go down and there would be no men in the future to fight. They had supposedly agreed to have all newborn male babies killed. It is not known how many male babies were supposedly killed by being smothered, but it had reportedly happened to all males over a 10-year period. However, this claim of male infanticide in Papua New Guinea was probably the result of inaccurate and sensationalistic news reporting: Salvation Army workers in the Gimi region denied that the supposed male infanticide actually happened, and said that the tribal women were merely speaking hypothetically and hyperbolically about male infanticide at a peace and reconciliation workshop in order to make a point. The tribal women had never planned to actually kill their own sons. In England and Wales there were typically 30 to 50 homicides per million children under 1 year old between 1982 and 1996. The younger the infant, the higher the risk. The rate for children aged 1 to 5 years was around 10 per million children. The homicide rate of infants under 1 year is significantly higher than for the general population. In English law infanticide is established as a distinct offence by the Infanticide Acts.
The Acts define infanticide as the killing of a child under 12 months of age by their mother, and their effect is to establish a partial defence to charges of murder. In the United States the infanticide rate during the first hour of life outside the womb dropped from 1.41 per 100,000 during 1963 to 1972 to 0.44 per 100,000 for 1974 to 1983; the rates during the first month after birth also declined, whereas those for older infants rose during this time. The legalization of abortion, which was completed in 1973, was the most important factor in the decline in neonatal mortality during the period from 1964 to 1977, according to a study by economists associated with the National Bureau of Economic Research. In Canada, 114 cases of infanticide by a parent were reported during 1964–1968. In Spain, the far-right political party Vox has claimed that female perpetrators of infanticide outnumber male perpetrators of femicide. However, neither the Spanish National Statistics Institute nor the Ministry of the Interior keeps data on the gender of perpetrators, though victims of femicide consistently number higher than victims of infanticide. From 2013 to March 2018, 28 infanticide cases perpetrated by 22 mothers and three stepmothers were reported in Spain. Intersex infants are commonly subjected to infanticide, particularly in developing countries, largely because of the stigma surrounding intersex conditions. Often intersex infants are abandoned, while others are actively killed. Many intersex individuals are forced to flee due to persecution and violence, and, according to the United Nations Human Rights Council, many seek political asylum due to oppression. There are various reasons for infanticide. Neonaticide typically has different patterns and causes than the killing of older infants. Traditional neonaticide is often related to economic necessity – the inability to provide for the infant.
In the United Kingdom and the United States, older infants are typically killed for reasons related to child abuse, domestic violence or mental illness. For infants older than one day, younger infants are more at risk, and boys are more at risk than girls. Risk factors for the parent include a family history of violence, violence in a current relationship, a history of abuse or neglect of children, and personality disorder and/or depression. In the late 17th and early 18th centuries, "loopholes" were invented by some suicidal members of Lutheran churches who wanted to avoid the damnation that was promised by most Christian doctrine as a penalty of suicide. One famous example of someone who wished to end their life but avoid an eternity in hell was Christina Johansdotter (died 1740), a Swedish murderer who killed a child in Stockholm with the sole purpose of being executed. She is an example of those who sought suicide through execution by committing a murder. It was a common act, frequently targeting young children or infants, as they were believed to be free from sin and thus believed to go "straight to heaven". Although mainstream Christian denominations, including Lutherans, view the murder of an innocent as condemned by the Fifth Commandment, the suicidal members of Lutheran churches who deliberately killed children with the intent of being executed were usually well aware of Christian doctrine against murder, and planned to repent and seek forgiveness of their sins afterwards. For example, in 18th-century Denmark up until 1767, murderers were given the opportunity to repent of their sins before they were executed. In 1767, religiously motivated suicidal murders finally ceased in Denmark with the abolition of the death penalty. In 1888, Lieut. F. Elton reported that Ugi beach people in the Solomon Islands killed their infants at birth by burying them, and women were also said to practice abortion.
They reported that it was too much trouble to raise a child, and that they instead preferred to buy one from the bush people. Many historians believe the reason to be primarily economic, with more children born than the family is prepared to support. In societies that are patrilineal and patrilocal, the family may choose to allow more sons to live and kill some daughters, as the former will support their birth family until they die, whereas the latter will leave economically and geographically to join their husband's family, possibly only after the payment of a burdensome dowry price. Thus the decision to bring up a boy is more economically rewarding to the parents. However, this does not explain why infanticide would occur equally among rich and poor, nor why it would be as frequent during decadent periods of the Roman Empire as during earlier, less affluent, periods. Before the appearance of effective contraception, infanticide was a common occurrence in ancient brothels. Unlike usual infanticide – where historically girls have been more likely to be killed – prostitutes in certain areas preferred to kill their male offspring. Instances of infanticide in Britain in the 18th and 19th centuries are often attributed to the economic position of the women, with juries committing "pious perjury" in many subsequent murder cases. The knowledge of the difficulties faced in the 18th century by those women who attempted to keep their children can be seen as a reason for juries to show compassion. If the woman chose to keep the child, society was not set up to ease the pressure placed upon the woman, legally, socially or economically. In mid-18th century Britain there was assistance available for women who were not able to raise their children. The Foundling Hospital opened in 1756 and was able to take in some of the illegitimate children. However, the conditions within the hospital caused Parliament to withdraw funding and the governors to live off their own incomes.
This resulted in a stringent entrance policy, with the committee imposing strict conditions for admission. Once a mother had admitted her child to the hospital, the hospital did all it could to ensure that the parent and child were not re-united. MacFarlane argues in Illegitimacy and Illegitimates in Britain (1980) that English society greatly concerned itself with the burden that a bastard child placed upon its community, and went to some lengths to ensure that the father of the child was identified in order to maintain its well-being. Assistance could be gained through maintenance payments from the father; however, this was capped "at a miserable 2 s and 6 d a week". If the father fell behind with the payments, he could only be asked "to pay a maximum of 13 weeks arrears". Despite accusations from some that women were getting a free hand-out, there is evidence that many women were far from receiving adequate assistance from their parish. "Within Leeds in 1822 ... relief was limited to 1 s per week". Sheffield required women to enter the workhouse, whereas Halifax gave no relief to the women who required it. The prospect of entering the workhouse was certainly something to be avoided. Lionel Rose quotes Dr Joseph Rogers in Massacre of the Innocents ... (1986). Rogers, who was employed by a London workhouse in 1856, stated that conditions in the nursery were 'wretchedly damp and miserable ... [and] ... overcrowded with young mothers and their infants'. The loss of social standing was a particular problem for a servant girl who produced a bastard child, as servants relied upon a good character reference to maintain their job and, more importantly, to get a new or better one. In a large number of trials for the crime of infanticide, it was a servant girl who stood accused. The disadvantage of being a servant girl was that she had to live up to the social standards of her superiors or risk dismissal and no references.
Within other professions, by contrast, such as factory work, the relationship between employer and employee was much more anonymous, and the mother would be better able to make other provisions, such as employing a minder. The result of the lack of basic social care in Britain in the 18th and 19th centuries is the numerous accounts in court records of women, particularly servant girls, standing trial for the murder of their children. There may have been no specific offence of infanticide in England before about 1623 because infanticide was a matter for the ecclesiastical courts, possibly because infant mortality from natural causes was high (about 15%, or roughly one in six). Thereafter the accusation of the suppression of bastard children by lewd mothers was a crime incurring the presumption of guilt. The Infanticide Acts are several laws. That of 1922 made the killing of an infant child by its mother during the early months of life a lesser crime than murder. The acts of 1938 and 1939 abolished the earlier act, but introduced the idea that postpartum depression was legally to be regarded as a form of diminished responsibility. Marvin Harris estimated that among Paleolithic hunters 23–50% of newborn children were killed. He argued that the goal was to preserve the 0.001% population growth of that time. He also wrote that female infanticide may be a form of population control. Population control is achieved not only by limiting the number of potential mothers; increased fighting among men for access to relatively scarce wives would also lead to a decline in population. For example, on the Melanesian island of Tikopia infanticide was used to keep a stable population in line with its resource base. Research by Marvin Harris and William Divale supports this argument; it has been cited as an example of environmental determinism. Evolutionary psychology has proposed several theories for different forms of infanticide.
Infanticide by stepfathers, as well as child abuse in general by stepfathers, has been explained by the fact that spending resources on genetically unrelated children reduces reproductive success (see the Cinderella effect and Infanticide (zoology)). Infanticide is one of the few forms of violence more often committed by women than men. Cross-cultural research has found that it is more likely to occur when the child has deformities or illnesses, and when resources are lacking due to factors such as poverty, other children requiring resources, and no male support. Such a child may have a low chance of reproductive success, in which case spending resources on it would decrease the mother's inclusive fitness, in particular since women generally have a greater parental investment than men. A minority of academics subscribe to an alternate school of thought, considering the practice as "early infanticidal childrearing". They attribute parental infanticidal wishes to massive projection or displacement of the parents' unconscious onto the child, because of intergenerational, ancestral abuse by their own parents. Clearly, an infanticidal parent may have multiple motivations, conflicts, emotions, and thoughts about their baby and their relationship with their baby, which are often colored both by their individual psychology, current relational context and attachment history, and, perhaps most saliently, their psychopathology. Almeida, Merminod, and Schechter suggest that parents with fantasies, projections, and delusions involving infanticide need to be taken seriously and assessed carefully, whenever possible, by an interdisciplinary team that includes infant mental health specialists or mental health practitioners who have experience in working with parents, children, and families.
In addition to debates over the morality of infanticide itself, there is some debate over the effects of infanticide on surviving children, and the effects of childrearing in societies that also sanction infanticide. Some argue that the practice of infanticide in any widespread form causes enormous psychological damage in children. Conversely, studying societies that practice infanticide, Géza Róheim reported that even infanticidal mothers in New Guinea, who ate a child, did not affect the personality development of the surviving children; that "these are good mothers who eat their own children". Harris and Divale's work on the relationship between female infanticide and warfare suggests that there are, however, extensive negative effects. Postpartum psychosis is also a causative factor of infanticide. Stuart S. Asch, MD, a professor of psychiatry at Cornell University, established the connections between some cases of infanticide and post-partum depression. The books From Cradle to Grave and The Death of Innocents describe selected cases of maternal infanticide and the investigative research of Professor Asch, working in concert with the New York City Medical Examiner's Office. Stanley Hopwood wrote that childbirth and lactation entail severe stress on the female sex, and that under certain circumstances attempts at infanticide and suicide are common. A study published in the American Journal of Psychiatry revealed that 44% of filicidal fathers had a diagnosis of psychosis. In addition to postpartum psychosis, dissociative psychopathology and sociopathy have also been found to be associated with neonaticide in some cases. In addition, severe postpartum depression can lead to infanticide. Sex selection may be one of the contributing factors of infanticide. In the absence of sex-selective abortion, sex-selective infanticide can be deduced from very skewed birth statistics.
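A minimal sketch of how such a deduction might work, using a simple normal approximation to the binomial and assuming the commonly cited biological norm of roughly 105 male births per 100 female; the function name and the cohort figures here are illustrative, not taken from any cited study:

```python
import math

def sex_ratio_z(male_births, female_births, p_male=105 / 205):
    """Z-score of the observed male share of births against the
    biologically expected share (~105 boys per 100 girls)."""
    n = male_births + female_births
    observed = male_births / n
    se = math.sqrt(p_male * (1 - p_male) / n)  # binomial standard error
    return (observed - p_male) / se

# A cohort near the biological norm (104 boys per 100 girls):
# |z| stays small, consistent with chance variation.
z_normal = sex_ratio_z(10_400, 10_000)

# A strongly skewed cohort (120 boys per 100 girls): z lies far
# outside chance variation, suggesting sex selection once biased
# reporting has been ruled out.
z_skewed = sex_ratio_z(12_000, 10_000)
print(round(z_normal, 2), round(z_skewed, 2))
```

In practice demographers use more careful tests and must first exclude under-registration of girls, but the logic is the same: only a ratio implausible under binomial chance supports the inference of sex selection.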
The biologically normal sex ratio for humans at birth is approximately 105 males per 100 females; normal ratios rarely range beyond 102–108. When a society has an infant male-to-female ratio which is significantly higher or lower than the biological norm, and biased data can be ruled out, sex selection can usually be inferred. Intersex infants with ambiguous or atypical genitalia often suffer infanticide. In New South Wales, infanticide is defined in Section 22A(1) of the Crimes Act 1900 (NSW) as follows: "Where a woman by any wilful act or omission causes the death of her child, being a child under the age of twelve months, but at the time of the act or omission the balance of her mind was disturbed by reason of her not having fully recovered from the effect of giving birth to the child or by reason of the effect of lactation consequent upon the birth of the child, then, notwithstanding that the circumstances were such that but for this section the offence would have amounted to murder, she shall be guilty of infanticide, and may for such offence be dealt with and punished as if she had been guilty of the offence of manslaughter of such child." Because infanticide is punishable as manslaughter, as per s 24, the maximum penalty for this offence is 25 years imprisonment. In Victoria, infanticide is defined by Section 6 of the Crimes Act of 1958, with a maximum penalty of five years.
In New Zealand, infanticide is provided for by Section 178 of the Crimes Act 1961 which states: Where a woman causes the death of any child of hers under the age of 10 years in a manner that amounts to culpable homicide, and where at the time of the offence the balance of her mind was disturbed, by reason of her not having fully recovered from the effect of giving birth to that or any other child, or by reason of the effect of lactation, or by reason of any disorder consequent upon childbirth or lactation, to such an extent that she should not be held fully responsible, she is guilty of infanticide, and not of murder or manslaughter, and is liable to imprisonment for a term not exceeding 3 years. In Canada, infanticide is a specific offence under section 237 of the Criminal Code. It is defined as a form of culpable homicide which is neither murder nor manslaughter, and occurs when "a female person... by a wilful act or omission... causes the death of her newly-born child [defined as a child under one year of age], if at the time of the act or omission she is not fully recovered from the effects of giving birth to the child and by reason thereof or of the effect of lactation consequent on the birth of the child her mind is then disturbed." Infanticide is also a defence to murder, in that a person accused of murder who successfully presents the defence is entitled to be convicted of infanticide rather than murder. The maximum sentence for infanticide is five years' imprisonment; by contrast, the maximum sentence for manslaughter is life, and the mandatory sentence for murder is life. 
The offence derives from an offence created in English law in 1922, which aimed to address the issue of judges and juries who were reluctant to return verdicts of murder against women and girls who killed their newborns out of poverty, depression, the shame of illegitimacy, or otherwise desperate circumstances, since the mandatory sentence was death (even though in those circumstances the death penalty was likely not to be carried out). With infanticide as a separate offence with a lesser penalty, convictions were more likely. The offence of infanticide was created in Canada in 1948. There is ongoing debate in the Canadian legal and political fields about whether section 237 of the Criminal Code should be amended or abolished altogether. In England and Wales, the Infanticide Act 1938 describes the offence of infanticide as the killing of a child under 12 months of age by its mother while the balance of her mind was disturbed by the effects of childbirth or lactation; but for those conditions, the killing would amount to murder. Where a mother who has killed such an infant has been charged with murder rather than infanticide, s.1(3) of the Act confirms that a jury has the power to find alternative verdicts of manslaughter or guilty but insane. Infanticide is illegal in the Netherlands, although the maximum sentence is lower than for homicide. The Groningen Protocol regulates euthanasia for infants who are believed to "suffer hopelessly and unbearably", under strict conditions. Article 149 of the Penal Code of Poland stipulates that a mother who kills her child in labour, while under the influence of the course of the delivery, is punishable by imprisonment of three months to five years. Article 200 of the Penal Code of Romania stipulates that the killing of a newborn during the first 24 hours, by a mother who is in a state of mental distress, shall be punished with imprisonment of one to five years.
The previous Romanian Penal Code also defined infanticide (pruncucidere) as a distinct criminal offense, providing for punishment of two to seven years' imprisonment and recognizing that a mother's judgment may be impaired immediately after birth, but it did not define the term "infant", which led to debates regarding the precise moment when infanticide becomes homicide. This issue was resolved by the new Penal Code, which came into force in 2014. While legislation regarding infanticide in some countries focuses on rehabilitation, in the belief that treatment and education will prevent repeat offences, the United States remains focused on delivering punishment. One justification for punishment is the difficulty of implementing rehabilitation services: with an overcrowded prison system, the United States cannot provide the necessary treatment and services. In 2009, Texas state representative Jessica Farrar proposed legislation that would define infanticide as a distinct and lesser crime than homicide. Under the terms of the proposed legislation, if jurors concluded that a mother's "judgment was impaired as a result of the effects of giving birth or the effects of lactation following the birth," they would be allowed to convict her of the crime of infanticide, rather than murder. The maximum penalty for infanticide would be two years in prison. Farrar's introduction of this bill prompted liberal bioethics scholar Jacob M. Appel to call her "the bravest politician in America". The MOTHERS Act (Moms Opportunity To access Health, Education, Research and Support), precipitated by the death of a Chicago woman with postpartum psychosis, was introduced in 2009. The act was ultimately incorporated into the Patient Protection and Affordable Care Act, which passed in 2010. The act requires screening for postpartum mood disorders at any time of the adult lifespan and expands research on postpartum depression.
Provisions of the act also authorize grants to support clinical services for women who have, or are at risk for, postpartum psychosis.

Since infanticide, especially neonaticide, is often a response to an unwanted birth, preventing unwanted pregnancies through improved sex education and increased contraceptive access is advocated as a way of preventing infanticide. Increased use of contraceptives and access to safe, legal abortions have greatly reduced neonaticide in many developed nations. Some argue that where abortion is illegal, as in Pakistan, infanticide would decline if safe, legal abortions were available.

Cases of infanticide have also garnered increasing attention from advocates for the mentally ill and from organizations dedicated to postpartum disorders. Following the trial of Andrea Yates, a mother from the United States who drew national attention for drowning her five children, representatives of organizations such as Postpartum Support International and the Marcé Society for Treatment and Prevention of Postpartum Disorders began requesting clarification of diagnostic criteria for postpartum disorders and improved treatment guidelines. Although accounts of postpartum psychosis date back more than 2,000 years, perinatal mental illness is still largely under-diagnosed, despite postpartum psychosis affecting 1 to 2 per 1,000 women. As clinical research continues to demonstrate the large role of rapid neurochemical fluctuation in postpartum psychosis, the prevention of infanticide points ever more strongly towards psychiatric intervention. Screening for psychiatric disorders or risk factors, and providing treatment or assistance to those at risk, may help prevent infanticide. Current diagnostic considerations include symptoms, psychological history, thoughts of self-harm or of harming one's children, physical and neurological examination, laboratory testing, substance abuse, and brain imaging.
As psychotic symptoms may fluctuate, it is important that diagnostic assessments cover a wide range of factors. While studies on the treatment of postpartum psychosis are scarce, a number of case and cohort studies have found evidence for the effectiveness of lithium monotherapy in both acute and maintenance treatment of postpartum psychosis, with the majority of patients achieving complete remission. Adjunctive treatments include electroconvulsive therapy, antipsychotic medication, and benzodiazepines. Electroconvulsive therapy, in particular, is the primary treatment for patients with catatonia, severe agitation, or difficulty eating and drinking. Antidepressants should be avoided throughout the acute treatment of postpartum psychosis because of the risk of worsening mood instability.

Though screening and treatment may help prevent infanticide, in the developed world a significant proportion of detected neonaticides occur among young women who deny their pregnancy and avoid outside contacts, and who may therefore have little contact with health care services.

In some areas, baby hatches or safe surrender sites, safe places for a mother to anonymously leave an infant, are offered, in part to reduce the rate of infanticide. In other places, such as the United States, safe-haven laws allow mothers to anonymously give infants to designated officials; such sites are frequently located at hospitals and at police and fire stations. Additionally, some countries in Europe have laws permitting anonymous birth and confidential birth, which allow mothers to give up an infant after birth. In an anonymous birth, the mother does not attach her name to the birth certificate. In a confidential birth, the mother registers her name and information, but the document containing her name is sealed until the child comes of age. Typically such babies are put up for adoption or cared for in orphanages.

Granting women employment raises their status and autonomy.
Gainful employment can raise the perceived worth of females, which can lead to an increase in the number of women receiving an education and a decrease in female infanticide. As a result, the infant mortality rate decreases and economic development increases.

The practice has been observed in many other species of the animal kingdom since it was first seriously studied by Yukimaru Sugiyama. These range from microscopic rotifers and insects to fish, amphibians, birds and mammals, including primates such as chacma baboons. According to studies of primates carried out by Kyoto University, including certain types of gorillas and chimpanzees, several conditions favor the tendency in some species to kill offspring (a behaviour performed only by males), among them: nocturnal life, the absence of nest construction, marked sexual dimorphism in which the male is much larger than the female, mating in a specific season, and a long period of lactation without resumption of estrus in the female.

An instance in which a child born on an inauspicious day was left to live or die according to the chance of being trampled by cattle (death being likely) is depicted in Infanticide in Madagascar, painted by Henry Melville and engraved by J. Redaway for Fisher's Drawing Room Scrap Book, 1838, with a poetical illustration and notes by Letitia Elizabeth Landon.
[ { "paragraph_id": 0, "text": "Infanticide (or infant homicide) is the intentional killing of infants or offspring. Infanticide was a widespread practice throughout human history that was mainly used to dispose of unwanted children, its main purpose being the prevention of resources being spent on weak or disabled offspring. Unwanted infants were usually abandoned to die of exposure, but in some societies they were deliberately killed. Infanticide is broadly illegal, but in some places the practice is tolerated, or the prohibition is not strictly enforced.", "title": "" }, { "paragraph_id": 1, "text": "Most Stone Age human societies routinely practiced infanticide, and estimates of children killed by infanticide in the Mesolithic and Neolithic eras vary from 15 to 50 percent. Infanticide continued to be common in most societies after the historical era began, including ancient Greece, ancient Rome, the Phoenicians, ancient China, ancient Japan, Pre-Islamic Arabia, Aboriginal Australia, Native Americans, and Native Alaskans.", "title": "" }, { "paragraph_id": 2, "text": "Infanticide became forbidden in Europe and the Near East during the 1st millennium. Christianity forbade infanticide from its earliest times, which led Constantine the Great and Valentinian I to ban infanticide across the Roman Empire in the 4th century. Yet, infanticide was not unacceptable in some wars, and infanticide in Europe reached its peak during World War II (1939–45), during the Holocaust and the T4 Program. The practice ceased in Arabia in the 7th century after the founding of Islam, since the Quran prohibits infanticide. Infanticide of male babies had become uncommon in China by the Ming dynasty (1368–1644), whereas infanticide of female babies became more common during the One-Child Policy era (1979–2015). 
During the period of Company rule in India, the East India Company attempted to eliminate infanticide but were only partially successful, and female infanticide in some parts of India still continues. Infanticide is very rare in industrialised countries but may persist elsewhere.", "title": "" }, { "paragraph_id": 3, "text": "Parental infanticide researchers have found that mothers are more likely to commit infanticide. In the special case of neonaticide (murder in the first 24 hours of life), mothers account for almost all the perpetrators. Fatherly cases of neonaticide are so rare that they are individually recorded.", "title": "" }, { "paragraph_id": 4, "text": "The practice of infanticide has taken many forms over time. Child sacrifice to supernatural figures or forces, such as that believed to have been practiced in ancient Carthage, may be only the most notorious example in the ancient world.", "title": "History" }, { "paragraph_id": 5, "text": "A frequent method of infanticide in ancient Europe and Asia was simply to abandon the infant, leaving it to die by exposure (i.e., hypothermia, hunger, thirst, or animal attack).", "title": "History" }, { "paragraph_id": 6, "text": "On at least one island in Oceania, infanticide was carried out until the 20th century by suffocating the infant, while in pre-Columbian Mesoamerica and in the Inca Empire it was carried out by sacrifice (see below).", "title": "History" }, { "paragraph_id": 7, "text": "Many Neolithic groups routinely resorted to infanticide in order to control their numbers so that their lands could support them. Joseph Birdsell believed that infanticide rates in prehistoric times were between 15% and 50% of the total number of births, while Laila Williamson estimated a lower rate ranging from 15% to 20%. Both anthropologists believed that these high rates of infanticide persisted until the development of agriculture during the Neolithic Revolution. 
A book published in 1981 stated that comparative anthropologists estimated that 50% of female newborn babies may have been killed by their parents during the Paleolithic era. The anthropologist Raymond Dart has interpreted fractures on the skulls of hominid infants (e.g. the Taung Child) as due to deliberate killing followed by cannibalism, but such explanations are by now considered uncertain and possibly wrong. Children were not necessarily actively killed, but neglect and intentional malnourishment may also have occurred, as proposed by Vicente Lull as an explanation for an apparent surplus of men and the below average height of women in prehistoric Menorca.", "title": "History" }, { "paragraph_id": 8, "text": "Archaeologists have uncovered physical evidence of child sacrifice at several locations. Some of the best attested examples are the diverse rites which were part of the religious practices in Mesoamerica and the Inca Empire.", "title": "History" }, { "paragraph_id": 9, "text": "Three thousand bones of young children, with evidence of sacrificial rituals, have been found in Sardinia. Pelasgians offered a sacrifice of every tenth child during difficult times. Many remains of children have been found in Gezer excavations with signs of sacrifice. Child skeletons with the marks of sacrifice have been found also in Egypt dating 950–720 BCE. In Carthage \"[child] sacrifice in the ancient world reached its infamous zenith\". Besides the Carthaginians, other Phoenicians, and the Canaanites, Moabites and Sepharvites offered their first-born as a sacrifice to their gods.", "title": "History" }, { "paragraph_id": 10, "text": "In Egyptian households, at all social levels, children of both sexes were valued and there is no evidence of infanticide. 
The religion of the ancient Egyptians forbade infanticide and during the Greco-Roman period they rescued abandoned babies from manure heaps, a common method of infanticide by Greeks or Romans, and were allowed to either adopt them as foundlings or raise them as slaves, often giving them names such as \"copro-\" to memorialize their rescue. Strabo considered it a peculiarity of the Egyptians that every child must be reared. Diodorus indicates infanticide was a punishable offence. Egypt was heavily dependent on the annual flooding of the Nile to irrigate the land, and in years of low inundation severe famine could occur, with resulting breakdowns in social order, notably between 930–1070 CE and 1180–1350 CE. Instances of cannibalism are recorded during these periods, but it is unknown if this happened during the pharaonic era of ancient Egypt. Beatrix Midant-Reynes describes human sacrifice as having occurred at Abydos in the early dynastic period (c. 3150–2850 BCE), while Jan Assmann asserts there is no clear evidence of human sacrifice ever happening in ancient Egypt.", "title": "History" }, { "paragraph_id": 11, "text": "According to Shelby Brown, Carthaginians, descendants of the Phoenicians, sacrificed infants to their gods. Charred bones of hundreds of infants have been found in Carthaginian archaeological sites. One such area harbored as many as 20,000 burial urns. Skeptics suggest that the bodies of children found in Carthaginian and Phoenician cemeteries were merely the cremated remains of children who died naturally.", "title": "History" }, { "paragraph_id": 12, "text": "Plutarch (c. 46–120 CE) mentions the practice, as do Tertullian, Orosius, Diodorus Siculus and Philo. The Hebrew Bible also mentions what appears to be child sacrifice practiced at a place called the Tophet (from the Hebrew taph or toph, to burn) by the Canaanites. 
Writing in the 3rd century BCE, Kleitarchos, one of the historians of Alexander the Great, described how infants were rolled into the flaming pit. Diodorus Siculus wrote that babies were roasted to death inside the burning pit of the god Baal Hamon, a bronze statue.", "title": "History" }, { "paragraph_id": 13, "text": "The historical Greeks considered the practice of adult and child sacrifice barbarous; however, infant exposure was widely practiced in ancient Greece. It was advocated by Aristotle in the case of congenital deformity: \"As to the exposure of children, let there be a law that no deformed child shall live.\" In Greece, the decision to expose a child was typically the father's, although in Sparta the decision was made by a group of elders. Exposure was the preferred method of disposal, as that act in itself was not considered to be murder; moreover, the exposed child technically had a chance of being rescued by the gods or any passersby. This very situation was a recurring motif in Greek mythology. To notify the neighbors of the birth of a child, a woolen strip was hung over the front door to indicate a female baby, and an olive branch to indicate a boy. Families did not always keep their new child. After a woman had a baby, she would show it to her husband. If the husband accepted it, it would live, but if he refused it, it would die. Babies would often be rejected if they were illegitimate, unhealthy or deformed, the wrong sex, or too great a burden on the family. These babies would not be directly killed, but put in a clay pot or jar and deserted outside the front door or on the roadway. In ancient Greek religion, this practice took the responsibility away from the parents because the child would die of natural causes, for example, hunger, asphyxiation or exposure to the elements.", "title": "History" }, { "paragraph_id": 14, "text": "The practice was prevalent in ancient Rome, as well. 
Philo was the first known philosopher to speak out against it. A letter from a Roman citizen to his sister, or a pregnant wife from her husband, dating from 1 BCE, demonstrates the casual nature with which infanticide was often viewed:", "title": "History" }, { "paragraph_id": 15, "text": "In some periods of Roman history it was traditional for a newborn to be brought to the pater familias, the family patriarch, who would then decide whether the child was to be kept and raised, or left to die by exposure. The Twelve Tables of Roman law obliged him to put to death a child that was visibly deformed. The concurrent practices of slavery and infanticide contributed to the \"background noise\" of the crises during the Republic.", "title": "History" }, { "paragraph_id": 16, "text": "Infanticide became a capital offense in Roman law in 374, but offenders were rarely if ever prosecuted.", "title": "History" }, { "paragraph_id": 17, "text": "According to mythology, Romulus and Remus, twin infant sons of the war god Mars, survived near-infanticide after being tossed into the Tiber River. According to the myth, they were raised by wolves, and later founded the city of Rome.", "title": "History" }, { "paragraph_id": 18, "text": "Whereas theologians and clerics preached sparing their lives, newborn abandonment continued as registered in both the literature record and in legal documents. According to William Lecky, exposure in the early Middle Ages, as distinct from other forms of infanticide, \"was practiced on a gigantic scale with absolute impunity, noticed by writers with most frigid indifference and, at least in the case of destitute parents, considered a very venial offence\". However the first foundling house in Europe was established in Milan in 787 on account of the high number of infanticides and out-of-wedlock births. 
The Hospital of the Holy Spirit in Rome was founded by Pope Innocent III because women were throwing their infants into the Tiber river.", "title": "History" }, { "paragraph_id": 19, "text": "Unlike other European regions, in the Middle Ages the German mother had the right to expose the newborn.", "title": "History" }, { "paragraph_id": 20, "text": "In the High Middle Ages, abandoning unwanted children finally eclipsed infanticide. Unwanted children were left at the door of church or abbey, and the clergy was assumed to take care of their upbringing. This practice also gave rise to the first orphanages.", "title": "History" }, { "paragraph_id": 21, "text": "However, very high sex ratios were common in even late medieval Europe, which may indicate sex-selective infanticide. The Waldensians, a pre-Reformation medieval Christian sect deemed heretical by the Catholic Church, were accused of participating in infanticide.", "title": "History" }, { "paragraph_id": 22, "text": "Judaism prohibits infanticide, and has for some time, dating back to at least early Common Era. Roman historians wrote about the ideas and customs of other peoples, which often diverged from their own. Tacitus recorded that the Jews \"take thought to increase their numbers, for they regard it as a crime to kill any late-born children\". Josephus, whose works give an important insight into 1st-century Judaism, wrote that God \"forbids women to cause abortion of what is begotten, or to destroy it afterward\".", "title": "History" }, { "paragraph_id": 23, "text": "In his book Germania, Tacitus wrote in 98 CE that the ancient Germanic tribes enforced a similar prohibition. He found such mores remarkable and commented: \"To restrain generation and the increase of children, is esteemed [by the Germans] an abominable sin, as also to kill infants newly born.\" It has become clear over the millennia, though, that Tacitus' description was inaccurate; the consensus of modern scholarship significantly differs. 
John Boswell believed that in ancient Germanic tribes unwanted children were exposed, usually in the forest. \"It was the custom of the [Teutonic] pagans, that if they wanted to kill a son or daughter, they would be killed before they had been given any food.\" Usually children born out of wedlock were disposed of that way.", "title": "History" }, { "paragraph_id": 24, "text": "In his highly influential Pre-historic Times, John Lubbock described burnt bones indicating the practice of child sacrifice in pagan Britain.", "title": "History" }, { "paragraph_id": 25, "text": "The last canto, Marjatan poika (Son of Marjatta), of Finnish national epic Kalevala describes assumed infanticide. Väinämöinen orders the infant bastard son of Marjatta to be drowned in a marsh.", "title": "History" }, { "paragraph_id": 26, "text": "The Íslendingabók, the main source for the early history of Iceland, recounts that on the Conversion of Iceland to Christianity in 1000 it was provided – in order to make the transition more palatable to Pagans – that \"the old laws allowing exposure of newborn children will remain in force\". However, this provision – among other concessions made at the time to the Pagans – was abolished some years later.", "title": "History" }, { "paragraph_id": 27, "text": "Christianity explicitly rejects infanticide. The Teachings of the Apostles or Didache said \"thou shalt not kill a child by abortion, neither shalt thou slay it when born\". The Epistle of Barnabas stated an identical command, both thus conflating abortion and infanticide. Apologists Tertullian, Athenagoras, Minucius Felix, Justin Martyr and Lactantius also maintained that exposing a baby to death was a wicked act. In 318, Constantine I considered infanticide a crime, and in 374, Valentinian I mandated the rearing of all children (exposing babies, especially girls, was still common). 
The Council of Constantinople declared that infanticide was homicide, and in 589, the Third Council of Toledo took measures against the custom of killing their own children.", "title": "History" }, { "paragraph_id": 28, "text": "Some Muslim sources allege that pre-Islamic Arabian society practiced infanticide as a form of \"post-partum birth control\". The word waʾd was used to describe the practice. These sources state that infanticide was practiced either out of destitution (thus practiced on males and females alike), or as \"disappointment and fear of social disgrace felt by a father upon the birth of a daughter\".", "title": "History" }, { "paragraph_id": 29, "text": "Some authors believe that there is little evidence that infanticide was prevalent in pre-Islamic Arabia or early Muslim history, except for the case of the Tamim tribe, who practiced it during severe famine according to Islamic sources. Others state that \"female infanticide was common all over Arabia during this period of time\" (pre-Islamic Arabia), especially by burying alive a female newborn. A tablet discovered in Yemen, forbidding the people of a certain town from engaging in the practice, is the only written reference to infanticide within the peninsula in pre-Islamic times.", "title": "History" }, { "paragraph_id": 30, "text": "Infanticide is explicitly prohibited by the Qur'an. \"And do not kill your children for fear of poverty; We give them sustenance and yourselves too; surely to kill them is a great wrong.\" Together with polytheism and homicide, infanticide is regarded as a grave sin (see 6:151 and 60:12). Infanticide is also implicitly denounced in the story of Pharaoh's slaughter of the male children of Israelites (see 2:49; 7:127; 7:141; 14:6; 28:4; 40:25).", "title": "History" }, { "paragraph_id": 31, "text": "Infanticide may have been practiced as human sacrifice, as part of the pagan cult of Perun. 
Ibn Fadlan describes sacrificial practices at the time of his trip to Kiev Rus (present-day Ukraine) in 921–922, and describes an incident of a woman voluntarily sacrificing her life as part of a funeral rite for a prominent leader, but makes no mention of infanticide. The Primary Chronicle, one of the most important literary sources before the 12th century, indicates that human sacrifice to idols may have been introduced by Vladimir the Great in 980. The same Vladimir the Great formally converted Kiev Rus into Christianity just 8 years later, but pagan cults continued to be practiced clandestinely in remote areas as late as the 13th century.", "title": "History" }, { "paragraph_id": 32, "text": "American explorer George Kennan noted that among the Koryaks, a people of north-eastern Siberia, infanticide was still common in the nineteenth century. One of a pair of twins was always sacrificed.", "title": "History" }, { "paragraph_id": 33, "text": "Infanticide (as a crime) gained both popular and bureaucratic significance in Victorian Britain. By the mid-19th century, in the context of criminal lunacy and the insanity defence, killing one's own child(ren) attracted ferocious debate, as the role of women in society was defined by motherhood, and it was thought that any woman who murdered her own child was by definition insane and could not be held responsible for her actions. Several cases were subsequently highlighted during the Royal Commission on Capital Punishment 1864–66, as a particular felony where an effective avoidance of the death penalty had informally begun.", "title": "History" }, { "paragraph_id": 34, "text": "The New Poor Law Act of 1834 ended parish relief for unmarried mothers and allowed fathers of illegitimate children to avoid paying for \"child support\". Unmarried mothers then received little assistance, and the poor were left with the option of either entering the workhouse, turning to prostitution, resorting to infanticide, or choosing abortion. 
By the middle of the century infanticide was common for social reasons, such as illegitimacy, and the introduction of child life insurance additionally encouraged some women to kill their children for gain. Examples include Mary Ann Cotton, who murdered many of her 15 children as well as three husbands; Margaret Waters, the 'Brixton Baby Farmer', a professional baby-farmer who was found guilty of infanticide in 1870; Jessie King, who was hanged in 1889; Amelia Dyer, the 'Angel Maker', who murdered over 400 babies in her care; and Ada Chard-Williams, a baby farmer who was later hanged at Newgate prison.", "title": "History" }, { "paragraph_id": 35, "text": "The Times reported that 67 infants were murdered in London in 1861 and 150 more recorded as \"found dead\", many of which were found on the streets. Another 250 were suffocated, half of them not recorded as accidental deaths. The report noted that \"infancy in London has to creep into life in the midst of foes.\"", "title": "History" }, { "paragraph_id": 36, "text": "Recording a birth as a still-birth was also another way of concealing infanticide because still-births did not need to be registered until 1926 and they did not need to be buried in public cemeteries. In 1895 The Sun (London) published the article, \"Massacre of the Innocents\", highlighting the dangers of baby-farming, the recording of stillbirths, and quoting Athelstan Braxton Hicks, the London coroner, on lying-in houses:", "title": "History" }, { "paragraph_id": 37, "text": "I have not the slightest doubt that a large amount of crime is covered by the expression 'still-birth'. There are a large number of cases of what are called newly-born children, which are found all over England, more especially in London and large towns, abandoned in streets, rivers, on commons, and so on... 
[A] great deal of that crime is due to what are called lying-in houses, which are not registered, or under the supervision of that sort, where the people who act as midwives constantly, as soon as the child is born, either drop it into a pail of water or smother it with a damp cloth. It is a very common thing, also, to find that they bash their heads on the floor and break their skulls.", "title": "History" }, { "paragraph_id": 38, "text": "The last British woman to be executed for infanticide of her own child was Rebecca Smith, who was hanged in Wiltshire in 1849.", "title": "History" }, { "paragraph_id": 39, "text": "The Infant Life Protection Act of 1897 required local authorities to be notified within 48 hours of changes in custody or the death of children under seven years. Under the Children's Act of 1908 \"no infant could be kept in a home that was so unfit and so overcrowded as to endanger its health, and no infant could be kept by an unfit nurse who threatened, by neglect or abuse, its proper care, and maintenance.\"", "title": "History" }, { "paragraph_id": 40, "text": "As of the 3rd century BC, short of execution, the harshest penalties were imposed on practitioners of infanticide by the legal codes of the Qin dynasty and Han dynasty of ancient China.", "title": "History" }, { "paragraph_id": 41, "text": "China's society practiced sex selective infanticide. 
Philosopher Han Fei Tzu, a member of the ruling aristocracy of the 3rd century BCE, who developed a school of law, wrote: \"As to children, a father and mother when they produce a boy congratulate one another, but when they produce a girl they put it to death.\" Among the Hakka people, and in Yunnan, Anhui, Sichuan, Jiangxi and Fujian a method of killing the baby was to put her into a bucket of cold water, which was called \"baby water\".", "title": "History" }, { "paragraph_id": 42, "text": "Infanticide was reported as early as the 3rd century BCE, and, by the time of the Song dynasty (960–1279 CE), it was widespread in some provinces. Belief in transmigration allowed poor residents of the country to kill their newborn children if they felt unable to care for them, hoping that they would be reborn in better circumstances. Furthermore, some Chinese did not consider newborn children fully \"human\" and saw \"life\" beginning at some point after the sixth month after birth.", "title": "History" }, { "paragraph_id": 43, "text": "The Venetian explorer Marco Polo claimed to have seen newborns exposed in Manzi. Contemporary writers from the Song dynasty note that, in Hubei and Fujian provinces, residents would only keep three sons and two daughters (among poor farmers, two sons, and one daughter), and kill all babies beyond that number at birth. Initially the sex of the child was only one factor to consider. By the time of the Ming Dynasty, however (1368–1644), male infanticide was becoming increasingly uncommon. The prevalence of female infanticide remained high much longer. The magnitude of this practice is subject to some dispute; however, one commonly quoted estimate is that, by late Qing, between one fifth and one-quarter of all newborn girls, across the entire social spectrum, were victims of infanticide. 
If one includes excess mortality among female children under 10 (ascribed to gender-differential neglect), the share of victims rises to one third.", "title": "History" }, { "paragraph_id": 44, "text": "Scottish physician John Dudgeon, who worked in Peking, China, during the early 20th century said that, \"Infanticide does not prevail to the extent so generally believed among us, and in the north, it does not exist at all.\"", "title": "History" }, { "paragraph_id": 45, "text": "Gender-selected abortion or sex identification (without medical uses), abandonment, and infanticide are illegal in present-day Mainland China. Nevertheless, the US State Department, and the human rights organization Amnesty International have all declared that Mainland China's family planning programs, called the one child policy (which has since changed to a two-child policy), contribute to infanticide. The sex gap between males and females aged 0–19 years old was estimated to be 25 million in 2010 by the United Nations Population Fund. But in some cases, in order to avoid Mainland China's family planning programs, parents will not report to government when a child is born (in most cases a girl), so she or he will not have an identity in the government and they can keep on giving birth until they are satisfied, without fines or punishment. In 2017, the government announced that all children without an identity can now have an identity legally, known as family register.", "title": "History" }, { "paragraph_id": 46, "text": "Since feudal Edo era Japan the common slang for infanticide was mabiki (間引き), which means to pull plants from an overcrowded garden. A typical method in Japan was smothering the baby's mouth and nose with wet paper. It became common as a method of population control. Farmers would often kill their second or third sons. Daughters were usually spared, as they could be married off, sold off as servants or prostitutes, or sent off to become geishas. 
Mabiki persisted in the 19th century and early 20th century. To bear twins was perceived as barbarous and unlucky and efforts were made to hide or kill one or both twins.", "title": "History" }, { "paragraph_id": 47, "text": "Female infanticide of newborn girls was systematic in feudatory Rajputs in South Asia for illegitimate female children during the Middle Ages. According to Firishta, as soon as the illegitimate female child was born she was held \"in one hand, and a knife in the other, that any person who wanted a wife might take her now, otherwise she was immediately put to death\". The practice of female infanticide was also common among the Kutch, Kehtri, Nagar, Bengal, Miazed, Kalowries and Sindh communities.", "title": "History" }, { "paragraph_id": 48, "text": "It was not uncommon that parents threw a child to the sharks in the Ganges River as a sacrificial offering. The East India Company administration were unable to outlaw the custom until the beginning of the 19th century.", "title": "History" }, { "paragraph_id": 49, "text": "According to social activists, female infanticide has remained a problem in India into the 21st century, with both NGOs and the government conducting awareness campaigns to combat it.", "title": "History" }, { "paragraph_id": 50, "text": "In some African societies some neonates were killed because of beliefs in evil omens or because they were considered unlucky. Twins were usually put to death in Arebo; as well as by the Nama people of South West Africa; in the Lake Victoria Nyanza region; by the Tswana in Portuguese East Africa; in some parts of Igboland, Nigeria twins were sometimes abandoned in a forest at birth (as depicted in Things Fall Apart), oftentimes one twin was killed or hidden by midwives of wealthier mothers; and by the !Kung people of the Kalahari Desert. 
The Kikuyu, Kenya's most populous ethnic group, practiced ritual killing of twins.", "title": "History" }, { "paragraph_id": 51, "text": "In Kenya, infanticide is rooted in old traditions and beliefs prevailing throughout the country. A survey conducted by Disability Rights International found that 45% of women interviewed by them in Kenya were pressured to kill their children born with disabilities. The pressure is much higher in the rural areas, with two out of every three mothers being pressured.", "title": "History" }, { "paragraph_id": 52, "text": "An 1866 issue of The Australian News for Home Readers informed readers that \"the crime of infanticide is so prevalent amongst the natives that it is rare to see an infant\".", "title": "History" }, { "paragraph_id": 53, "text": "Author Susanna de Vries in 2007 said that her accounts of Aboriginal violence, including infanticide, were censored by publishers in the 1980s and 1990s. She told reporters that the censorship \"stemmed from guilt over the stolen children question\". Keith Windschuttle weighed in on the conversation, saying this type of censorship started in the 1970s. In the same article Louis Nowra suggested that infanticide in customary Aboriginal law may have been because it was difficult to keep an abundant number of Aboriginal children alive; there were life-and-death decisions modern-day Australians no longer have to face.", "title": "History" }, { "paragraph_id": 54, "text": "Liz Conor's 2016 work, Skin Deep: Settler Impressions of Aboriginal Women, a culmination of 10 years of research, found that stories about Aboriginal women were told through a colonial lens of racism and misogyny. Vague stories of infanticide and cannibalism were repeated as reliable facts, and sometimes originated in accounts told by members of rival tribes about the other.
She also refers to Daisy Bates' now contested accounts of such practices, reproaching some historians for accepting them too uncritically.", "title": "History" }, { "paragraph_id": 55, "text": "According to William D. Rubinstein, \"Nineteenth-century European observers of Aboriginal life in South Australia and Victoria reported that about 30% of Aboriginal infants were killed at birth.\"", "title": "History" }, { "paragraph_id": 56, "text": "In 1881 James Dawson wrote a passage about infanticide among Indigenous people in the western district of Victoria, which stated that \"Twins are as common among them as among Europeans; but as food is occasionally very scarce, and a large family troublesome to move about, it is lawful and customary to destroy the weakest twin child, irrespective of sex. It is usual also to destroy those which are malformed.\"", "title": "History" }, { "paragraph_id": 57, "text": "He also wrote \"When a woman has children too rapidly for the convenience and necessities of the parents, she makes up her mind to let one be killed, and consults with her husband which it is to be. As the strength of a tribe depends more on males than females, the girls are generally sacrificed. The child is put to death and buried, or burned without ceremony; not, however, by its father or mother, but by relatives. No one wears mourning for it. Sickly children are never killed on account of their bad health, and are allowed to die naturally.\"", "title": "History" }, { "paragraph_id": 58, "text": "In 1937, a Christian reverend in the Kimberley offered a \"baby bonus\" to Aboriginal families as a deterrent against infanticide and to increase the birthrate of the local Indigenous population.", "title": "History" }, { "paragraph_id": 59, "text": "A Canberran journalist in 1927 wrote of the \"cheapness of life\" to the Aboriginal people local to the Canberra area 100 years before. 
\"If drought or bush fires had devastated the country and curtailed food supplies, babies got a short shift. Ailing babies, too would not be kept\", he wrote.", "title": "History" }, { "paragraph_id": 60, "text": "A bishop wrote in 1928 that it was common for Aboriginal Australians to restrict the size of their tribal groups, including by infanticide, so that the food resources of the tribal area may be sufficient for them.", "title": "History" }, { "paragraph_id": 61, "text": "Annette Hamilton, a professor of anthropology at Macquarie University, who carried out research in the Aboriginal community of Maningrida in Arnhem Land during the 1960s, wrote that prior to that time part-European babies born to Aboriginal mothers had not been allowed to live, and that \"mixed-unions are frowned on by men and women alike as a matter of principle\".", "title": "History" }, { "paragraph_id": 62, "text": "There is no agreement about the actual estimates of the frequency of newborn female infanticide in the Inuit population. Carmel Schrire mentions diverse studies ranging from 15 to 50% to 80%.", "title": "History" }, { "paragraph_id": 63, "text": "Polar Inuit (Inughuit) killed the child by throwing him or her into the sea. There is even a legend in Inuit mythology, \"The Unwanted Child\", where a mother throws her child into the fjord.", "title": "History" }, { "paragraph_id": 64, "text": "The Yukon and the Mahlemuit tribes of Alaska exposed the female newborns by first stuffing their mouths with grass before leaving them to die. 
In Arctic Canada the Inuit exposed their babies on the ice and left them to die.", "title": "History" }, { "paragraph_id": 65, "text": "Female Inuit infanticide disappeared in the 1930s and 1940s after contact with the Western cultures from the South.", "title": "History" }, { "paragraph_id": 66, "text": "However, it must be acknowledged that these infanticide claims came from non-Inuit observers, whose writings were later used to justify the forced westernization of indigenous peoples. Travis Hedwig argues that infanticide ran counter to cultural norms of the time and that researchers were misinterpreting the actions of an unfamiliar culture and people.", "title": "History" }, { "paragraph_id": 67, "text": "The Handbook of North American Indians reports infanticide among the Dene Natives and those of the Mackenzie Mountains.", "title": "History" }, { "paragraph_id": 68, "text": "In the Eastern Shoshone there was a scarcity of Native American women as a result of female infanticide. The Maidu Native Americans considered twins so dangerous that they killed not only the twins but the mother as well. In the region known today as southern Texas, the Mariame Native Americans practiced infanticide of females on a large scale. Wives had to be obtained from neighboring groups.", "title": "History" }, { "paragraph_id": 69, "text": "Bernal Díaz recounted that, after landing on the Veracruz coast, they came across a temple dedicated to Tezcatlipoca. \"That day they had sacrificed two boys, cutting open their chests and offering their blood and hearts to that accursed idol\".
In The Conquest of New Spain Díaz describes more child sacrifices in the towns before the Spaniards reached the large Aztec city Tenochtitlan.", "title": "History" }, { "paragraph_id": 70, "text": "Although academic data on infanticide among the indigenous peoples of South America are not as abundant as for North America, the estimates seem to be similar.", "title": "History" }, { "paragraph_id": 71, "text": "The Tapirapé indigenous people of Brazil allowed no more than three children per woman, and no more than two of the same sex. If the rule was broken infanticide was practiced. The Bororo killed all the newborns that did not appear healthy enough. Infanticide is also documented in the case of the Korubo people in the Amazon.", "title": "History" }, { "paragraph_id": 72, "text": "Yanomami men killed children while raiding enemy villages. Helena Valero, a Brazilian woman kidnapped by Yanomami warriors in the 1930s, witnessed a Karawetari raid on her tribe:", "title": "History" }, { "paragraph_id": 73, "text": "They killed so many. I was weeping for fear and for pity but there was nothing I could do. They snatched the children from their mothers to kill them, while the others held the mothers tightly by the arms and wrists as they stood up in a line. All the women wept. ... The men began to kill the children; little ones, bigger ones, they killed many of them.", "title": "History" }, { "paragraph_id": 74, "text": "While qhapaq hucha was practiced in the large Peruvian cities, child sacrifice in the pre-Columbian tribes of the region is less documented. However, even today studies on the Aymara Indians reveal high incidences of mortality among the newborn, especially female deaths, suggesting infanticide. The Abipones, a small tribe of Guaycuruan stock, of about 5,000 by the end of the 18th century in Paraguay, practiced systematic infanticide, with never more than two children being reared in one family. The Machigenga killed their disabled children.
Infanticide among the Chaco in Paraguay was estimated to be as high as 50% of all newborns in that tribe; the victims were usually buried. The infanticidal custom had such deep roots among the Ayoreo in Bolivia and Paraguay that it persisted until the late 20th century.", "title": "History" }, { "paragraph_id": 75, "text": "Infanticide has become less common in the Western world. The frequency has been estimated to be 1 in approximately 3000 to 5000 children of all ages and 2.1 per 100,000 newborns per year. It is thought that infanticide today continues at a much higher rate in areas of extremely high poverty and overpopulation, such as parts of India. Female infants, then and even now, are particularly vulnerable, a factor in sex-selective infanticide. Recent estimates suggest that over 100 million girls and women are 'missing' in Asia.", "title": "Modern times" }, { "paragraph_id": 76, "text": "Although it is illegal, parents in Benin, West Africa, secretly continue infanticidal customs.", "title": "Modern times" }, { "paragraph_id": 77, "text": "There have been some accusations that infanticide occurs in Mainland China due to the one-child policy. In the 1990s, a certain stretch of the Yangtze River was known to be a common site of infanticide by drowning, until government projects made access to it more difficult. A study from 2012 suggests that over 40 million girls and women are missing in Mainland China (Klasen and Wink 2002).", "title": "Modern times" }, { "paragraph_id": 78, "text": "The practice has continued in some rural areas of India.
India has the highest infanticide rate in the world, despite infanticide being illegal.", "title": "Modern times" }, { "paragraph_id": 79, "text": "According to a 2005 report by the United Nations Children's Fund (UNICEF), up to 50 million girls and women are missing in India's population as a result of systematic sex discrimination and sex selective abortions.", "title": "Modern times" }, { "paragraph_id": 80, "text": "Killings of newborn babies have been on the rise in Pakistan, corresponding to an increase in poverty across the country. More than 1,000 infants, mostly girls, were killed or abandoned to die in Pakistan in 2009, according to a Pakistani charity organization.", "title": "Modern times" }, { "paragraph_id": 81, "text": "The Edhi Foundation found 1,210 dead babies in 2010. Many more are abandoned and left at the doorsteps of mosques. As a result, Edhi centers feature signs reading \"Do not murder, lay them here.\" Though female infanticide is punishable by life in prison, such crimes are rarely prosecuted.", "title": "Modern times" }, { "paragraph_id": 82, "text": "On November 28, 2008, The National, one of Papua New Guinea's two largest newspapers at the time, ran a story entitled \"Male Babies Killed To Stop Fights\". It claimed that in the Agibu and Amosa villages of the Gimi region of Eastern Highlands province of Papua New Guinea, where tribal fighting had been going on since 1986 (many of the clashes arising over claims of sorcery), women had agreed that if they stopped producing males, allowing only female babies to survive, their tribes' stock of boys would go down and there would be no men in the future to fight. They had supposedly agreed to have all newborn male babies killed.
It is not known how many male babies were supposedly killed by being smothered, but it had reportedly happened to all males over a 10-year period.", "title": "Modern times" }, { "paragraph_id": 83, "text": "However, this claim about male infanticide in Papua New Guinea was probably just the result of inaccurate and sensationalistic news reporting, because Salvation Army workers in the region of Gimi denied that the supposed male infanticide actually happened, and said that the tribal women were merely speaking hypothetically and hyperbolically about male infanticide at a peace and reconciliation workshop in order to make a point. The tribal women had never planned to actually kill their own sons.", "title": "Modern times" }, { "paragraph_id": 84, "text": "In England and Wales there were typically 30 to 50 homicides per million children less than 1 year old between 1982 and 1996. The younger the infant, the higher the risk. The rate for children 1 to 5 years was around 10 per million children. The homicide rate of infants less than 1 year is significantly higher than for the general population.", "title": "Modern times" }, { "paragraph_id": 85, "text": "In English law infanticide is established as a distinct offence by the Infanticide Acts. Defined as the killing of a child under 12 months of age by its mother, the effect of the Acts is to establish a partial defence to charges of murder.", "title": "Modern times" }, { "paragraph_id": 86, "text": "In the United States the infanticide rate during the first hour of life outside the womb dropped from 1.41 per 100,000 during 1963 to 1972 to 0.44 per 100,000 for 1974 to 1983; the rates during the first month after birth also declined, whereas those for older infants rose during this time.
The legalization of abortion, which was completed in 1973, was the most important factor in the decline in neonatal mortality during the period from 1964 to 1977, according to a study by economists associated with the National Bureau of Economic Research.", "title": "Modern times" }, { "paragraph_id": 87, "text": "In Canada, 114 cases of infanticide by a parent were reported during 1964–1968.", "title": "Modern times" }, { "paragraph_id": 88, "text": "In Spain, far-right political party Vox has claimed that female perpetrators of infanticide outnumber male perpetrators of femicide. However, neither the Spanish National Statistics Institute nor the Ministry of the Interior keeps data on the gender of perpetrators; victims of femicide, moreover, consistently outnumber victims of infanticide. From 2013 to March 2018, 28 infanticide cases perpetrated by 22 mothers and three stepmothers were reported in Spain.", "title": "Modern times" }, { "paragraph_id": 89, "text": "Intersex infants commonly suffer infanticide, particularly in developing countries, largely because of stigma surrounding intersex conditions. Often intersex infants are abandoned, while others are actively killed. Many intersex individuals are forced to flee persecution and violence, and, according to the United Nations Human Rights Council, many seek political asylum due to oppression.", "title": "Modern times" }, { "paragraph_id": 90, "text": "There are various reasons for infanticide. Neonaticide typically has different patterns and causes than the killing of older infants. Traditional neonaticide is often related to economic necessity – the inability to provide for the infant.", "title": "Explanations for the practice" }, { "paragraph_id": 91, "text": "In the United Kingdom and the United States, older infants are typically killed for reasons related to child abuse, domestic violence or mental illness.
Among infants older than one day, younger infants are at greater risk, and boys are more at risk than girls. Risk factors for the parent include: family history of violence, violence in a current relationship, history of abuse or neglect of children, and personality disorder and/or depression.", "title": "Explanations for the practice" }, { "paragraph_id": 92, "text": "In the late 17th and early 18th centuries, \"loopholes\" were invented by some suicidal members of Lutheran churches who wanted to avoid the damnation that was promised by most Christian doctrine as a penalty of suicide. One famous example of someone who wished to end their life but avoid an eternity in hell was Christina Johansdotter (died 1740). She was a Swedish murderer who killed a child in Stockholm with the sole purpose of being executed. She is an example of those who sought suicide through execution by committing a murder. It was a common act, frequently targeting young children or infants, as they were believed to be free from sin and thus to go \"straight to heaven\".", "title": "Explanations for the practice" }, { "paragraph_id": 93, "text": "Although mainstream Christian denominations, including Lutherans, view the murder of an innocent as being condemned in the Fifth Commandment, the suicidal members of Lutheran churches who deliberately killed children with the intent of getting executed were usually well aware of Christian doctrine against murder, and planned to repent and seek forgiveness of their sins afterwards. For example, in 18th-century Denmark up until 1767, murderers were given the opportunity to repent of their sins before they were executed. In 1767, these religiously motivated suicidal murders finally ceased in Denmark with the abolition of the death penalty for such crimes.", "title": "Explanations for the practice" }, { "paragraph_id": 94, "text": "In 1888, Lieut. F.
Elton reported that Ugi beach people in the Solomon Islands killed their infants at birth by burying them, and women were also said to practice abortion. The beach people reported that it was too much trouble to raise a child, and instead preferred to buy one from the bush people.", "title": "Explanations for the practice" }, { "paragraph_id": 95, "text": "Many historians believe the reason to be primarily economic, with more children born than the family is prepared to support. In societies that are patrilineal and patrilocal, the family may choose to allow more sons to live and kill some daughters, as the former will support their birth family until they die, whereas the latter will leave economically and geographically to join their husband's family, possibly only after the payment of a burdensome dowry price. Thus the decision to bring up a boy is more economically rewarding to the parents. However, this does not explain why infanticide would occur equally among rich and poor, nor why it would be as frequent during decadent periods of the Roman Empire as during earlier, less affluent, periods.", "title": "Explanations for the practice" }, { "paragraph_id": 96, "text": "Before the appearance of effective contraception, infanticide was a common occurrence in ancient brothels. Unlike usual infanticide – where historically girls have been more likely to be killed – prostitutes in certain areas preferred to kill their male offspring.", "title": "Explanations for the practice" }, { "paragraph_id": 97, "text": "Instances of infanticide in Britain in the 18th and 19th centuries are often attributed to the economic position of the women, with juries committing \"pious perjury\" in many subsequent murder cases. The knowledge of the difficulties faced in the 18th century by those women who attempted to keep their children can be seen as a reason for juries to show compassion.
If the woman chose to keep the child, society was not set up to ease the pressure placed upon her, legally, socially or economically.", "title": "Explanations for the practice" }, { "paragraph_id": 98, "text": "In mid-18th century Britain there was assistance available for women who were not able to raise their children. The Foundling Hospital opened in 1756 and was able to take in some of the illegitimate children. However, the conditions within the hospital caused Parliament to withdraw funding and the governors to live off their own incomes. This resulted in a stringent entrance policy, with the committee requiring that the hospital:", "title": "Explanations for the practice" }, { "paragraph_id": 99, "text": "Once a mother had admitted her child to the hospital, the hospital did all it could to ensure that the parent and child were not re-united.", "title": "Explanations for the practice" }, { "paragraph_id": 100, "text": "MacFarlane argues in Illegitimacy and Illegitimates in Britain (1980) that English society greatly concerned itself with the burden that a bastard child placed upon its community and went to some lengths to ensure that the father of the child was identified in order to maintain its well-being. Assistance could be gained through maintenance payments from the father; however, this was capped \"at a miserable 2 s and 6 d a week\". If the father fell behind with the payments he could only be asked \"to pay a maximum of 13 weeks arrears\".", "title": "Explanations for the practice" }, { "paragraph_id": 101, "text": "Despite the accusations of some that women were getting a free hand-out, there is evidence that many women were far from receiving adequate assistance from their parish. \"Within Leeds in 1822 ... relief was limited to 1 s per week\". Sheffield required women to enter the workhouse, whereas Halifax gave no relief to the women who required it. The prospect of entering the workhouse was certainly something to be avoided.
Lionel Rose quotes Dr Joseph Rogers in Massacre of the Innocents ... (1986). Rogers, who was employed by a London workhouse in 1856, stated that conditions in the nursery were 'wretchedly damp and miserable ... [and] ... overcrowded with young mothers and their infants'.", "title": "Explanations for the practice" }, { "paragraph_id": 102, "text": "The loss of social standing for a servant girl was a particular problem in respect of producing a bastard child, as she relied upon a good character reference in order to maintain her job and, more importantly, to get a new or better job. In a large number of trials for the crime of infanticide, it is the servant girl that stood accused. The disadvantage of being a servant girl was that she had to live up to the social standards of her superiors or risk dismissal without references. In other professions, such as factory work, the relationship between employer and employee was much more anonymous, and the mother would have been better able to make other provisions, such as employing a minder. The result of the lack of basic social care in Britain in the 18th and 19th centuries is the numerous accounts in court records of women, particularly servant girls, standing trial for the murder of their child.", "title": "Explanations for the practice" }, { "paragraph_id": 103, "text": "There may have been no specific offense of infanticide in England before about 1623 because infanticide was a matter for the ecclesiastical courts, possibly because infant mortality from natural causes was high (about 15%, or one in six).", "title": "Explanations for the practice" }, { "paragraph_id": 104, "text": "Thereafter the accusation of the suppression of bastard children by lewd mothers was a crime incurring the presumption of guilt.", "title": "Explanations for the practice" }, { "paragraph_id": 105, "text": "The Infanticide Acts are several laws.
That of 1922 made the killing of an infant child by its mother during the early months of life a lesser crime than murder. The acts of 1938 and 1939 abolished the earlier act, but introduced the idea that postpartum depression was legally to be regarded as a form of diminished responsibility.", "title": "Explanations for the practice" }, { "paragraph_id": 106, "text": "Marvin Harris estimated that among Paleolithic hunters 23–50% of newborn children were killed. He argued that the goal was to preserve the 0.001% population growth of that time. He also wrote that female infanticide may be a form of population control. Population control is achieved not only by limiting the number of potential mothers; increased fighting among men for access to relatively scarce wives would also lead to a decline in population. For example, on the Melanesian island of Tikopia infanticide was used to keep a stable population in line with its resource base. Research by Marvin Harris and William Divale supports this argument; it has been cited as an example of environmental determinism.", "title": "Explanations for the practice" }, { "paragraph_id": 107, "text": "Evolutionary psychology has proposed several theories for different forms of infanticide. Infanticide by stepfathers, as well as child abuse in general by stepfathers, has been explained by the fact that spending resources on genetically unrelated children reduces reproductive success (see the Cinderella effect and Infanticide (zoology)). Infanticide is one of the few forms of violence more often done by women than men. Cross-cultural research has found that this is more likely to occur when the child has deformities or illnesses as well as when resources are lacking due to factors such as poverty, other children requiring resources, and no male support.
Such a child may have a low chance of reproductive success, in which case spending resources on it would decrease the mother's inclusive fitness, particularly since women generally have a greater parental investment than men.", "title": "Explanations for the practice" }, { "paragraph_id": 108, "text": "A minority of academics subscribe to an alternate school of thought, considering the practice as \"early infanticidal childrearing\". They attribute parental infanticidal wishes to massive projection or displacement of the parents' unconscious onto the child, because of intergenerational, ancestral abuse by their own parents. Clearly, an infanticidal parent may have multiple motivations, conflicts, emotions, and thoughts about their baby and their relationship with their baby, which are often colored both by their individual psychology, current relational context and attachment history, and, perhaps most saliently, their psychopathology. Almeida, Merminod, and Schechter suggest that parents with fantasies, projections, and delusions involving infanticide need to be taken seriously and assessed carefully, whenever possible, by an interdisciplinary team that includes infant mental health specialists or mental health practitioners who have experience in working with parents, children, and families.", "title": "Explanations for the practice" }, { "paragraph_id": 109, "text": "In addition to debates over the morality of infanticide itself, there is some debate over the effects of infanticide on surviving children, and the effects of childrearing in societies that also sanction infanticide. Some argue that the practice of infanticide in any widespread form causes enormous psychological damage in children.
Conversely, studying societies that practice infanticide, Géza Róheim reported that even infanticidal mothers in New Guinea who ate a child did not affect the personality development of the surviving children: \"these are good mothers who eat their own children\". Harris and Divale's work on the relationship between female infanticide and warfare suggests that there are, however, extensive negative effects.", "title": "Explanations for the practice" }, { "paragraph_id": 110, "text": "Postpartum psychosis is also a causative factor of infanticide. Stuart S. Asch, MD, a professor of psychiatry at Cornell University, established the connections between some cases of infanticide and post-partum depression. The books From Cradle to Grave and The Death of Innocents describe selected cases of maternal infanticide and the investigative research of Professor Asch working in concert with the New York City Medical Examiner's Office. Stanley Hopwood wrote that childbirth and lactation entail severe stress on the female sex, and that under certain circumstances attempts at infanticide and suicide are common. A study published in the American Journal of Psychiatry revealed that 44% of filicidal fathers had a diagnosis of psychosis. In addition to postpartum psychosis, dissociative psychopathology and sociopathy have also been found to be associated with neonaticide in some cases.", "title": "Explanations for the practice" }, { "paragraph_id": 111, "text": "In addition, severe postpartum depression can lead to infanticide.", "title": "Explanations for the practice" }, { "paragraph_id": 112, "text": "Sex selection may be one of the contributing factors of infanticide. In the absence of sex-selective abortion, sex-selective infanticide can be deduced from very skewed birth statistics. The biologically normal sex ratio for humans at birth is approximately 105 males per 100 females, with normal ratios rarely ranging beyond 102–108.
When a society has an infant male to female ratio which is significantly higher or lower than the biological norm, and biased data can be ruled out, sex selection can usually be inferred. Intersex infants with ambiguous or atypical genitalia often suffer from infanticide.", "title": "Explanations for the practice" }, { "paragraph_id": 113, "text": "In New South Wales, infanticide is defined in Section 22A(1) of the Crimes Act 1900 (NSW) as follows:", "title": "Current law" }, { "paragraph_id": 114, "text": "Where a woman by any willful act or omission causes the death of her child, being a child under the age of twelve months, but at the time of the act or omission the balance of her mind was disturbed by reason of her not having fully recovered from the effect of giving birth to the child or by reason of the effect of lactation consequent upon the birth of the child, then, notwithstanding that the circumstances were such that but for this section the offense would have amounted to murder, she shall be guilty of infanticide, and may for such offense be dealt with and punished as if she had been guilty of the offense of manslaughter of such child.", "title": "Current law" }, { "paragraph_id": 115, "text": "Because infanticide is punishable as manslaughter under s 24, the maximum penalty for this offence is 25 years' imprisonment.", "title": "Current law" }, { "paragraph_id": 116, "text": "In Victoria, infanticide is defined by Section 6 of the Crimes Act of 1958 with a maximum penalty of five years.", "title": "Current law" }, { "paragraph_id": 117, "text": "In New Zealand, infanticide is provided for by Section 178 of the Crimes Act 1961 which states:", "title": "Current law" }, { "paragraph_id": 118, "text": "Where a woman causes the death of any child of hers under the age of 10 years in a manner that amounts to culpable homicide, and where at the time of the offence the balance of her mind was disturbed, by reason of her not having fully recovered from
the effect of giving birth to that or any other child, or by reason of the effect of lactation, or by reason of any disorder consequent upon childbirth or lactation, to such an extent that she should not be held fully responsible, she is guilty of infanticide, and not of murder or manslaughter, and is liable to imprisonment for a term not exceeding 3 years.", "title": "Current law" }, { "paragraph_id": 119, "text": "In Canada, infanticide is a specific offence under section 237 of the Criminal Code. It is defined as a form of culpable homicide which is neither murder nor manslaughter, and occurs when \"a female person... by a wilful act or omission... causes the death of her newly-born child [defined as a child under one year of age], if at the time of the act or omission she is not fully recovered from the effects of giving birth to the child and by reason thereof or of the effect of lactation consequent on the birth of the child her mind is then disturbed.\" Infanticide is also a defence to murder, in that a person accused of murder who successfully presents the defence is entitled to be convicted of infanticide rather than murder. The maximum sentence for infanticide is five years' imprisonment; by contrast, the maximum sentence for manslaughter is life, and the mandatory sentence for murder is life.", "title": "Current law" }, { "paragraph_id": 120, "text": "The offence derives from an offence created in English law in 1922, which aimed to address the issue of judges and juries who were reluctant to return verdicts of murder against women and girls who killed their newborns out of poverty, depression, the shame of illegitimacy, or otherwise desperate circumstances, since the mandatory sentence was death (even though in those circumstances the death penalty was likely not to be carried out). With infanticide as a separate offence with a lesser penalty, convictions were more likely. 
The offence of infanticide was created in Canada in 1948.", "title": "Current law" }, { "paragraph_id": 121, "text": "There is ongoing debate in the Canadian legal and political fields about whether section 237 of the Criminal Code should be amended or abolished altogether.", "title": "Current law" }, { "paragraph_id": 122, "text": "In England and Wales, the Infanticide Act 1938 defines the offence of infanticide as an act by a mother which would otherwise amount to the murder of her child, where the child is under 12 months old and the balance of the mother's mind was disturbed by the effects of childbirth or lactation. Where a mother who has killed such an infant has been charged with murder rather than infanticide, s. 1(3) of the Act confirms that a jury has the power to return the alternative verdicts of manslaughter or guilty but insane.", "title": "Current law" }, { "paragraph_id": 123, "text": "Infanticide is illegal in the Netherlands, although the maximum sentence is lower than for homicide. The Groningen Protocol regulates euthanasia for infants who are believed to \"suffer hopelessly and unbearably\" under strict conditions.", "title": "Current law" }, { "paragraph_id": 124, "text": "Article 149 of the Penal Code of Poland stipulates that a mother who kills her child during delivery, under the influence of its course, is punishable by imprisonment of three months to five years.", "title": "Current law" }, { "paragraph_id": 125, "text": "Article 200 of the Penal Code of Romania stipulates that the killing of a newborn during the first 24 hours, by the mother who is in a state of mental distress, shall be punished with imprisonment of one to five years. 
The previous Romanian Penal Code also defined infanticide (pruncucidere) as a distinct criminal offense, providing for punishment of two to seven years' imprisonment. It recognized that a mother's judgment may be impaired immediately after birth, but did not define the term \"infant\", which led to debates regarding the precise moment when infanticide becomes homicide. This issue was resolved by the new Penal Code, which came into force in 2014.", "title": "Current law" }, { "paragraph_id": 126, "text": "While legislation regarding infanticide in some countries focuses on rehabilitation, believing that treatment and education will prevent repetitive action, the United States remains focused on delivering punishment. One justification for punishment is the difficulty of implementing rehabilitation services. With an overcrowded prison system, the United States cannot provide the necessary treatment and services.", "title": "Current law" }, { "paragraph_id": 127, "text": "In 2009, Texas state representative Jessica Farrar proposed legislation that would define infanticide as a distinct and lesser crime than homicide. Under the terms of the proposed legislation, if jurors concluded that a mother's \"judgment was impaired as a result of the effects of giving birth or the effects of lactation following the birth,\" they would be allowed to convict her of the crime of infanticide, rather than murder. The maximum penalty for infanticide would be two years in prison. Farrar's introduction of this bill prompted liberal bioethics scholar Jacob M. Appel to call her \"the bravest politician in America\".", "title": "Current law" }, { "paragraph_id": 128, "text": "The MOTHERS Act (Moms Opportunity To access Health, Education, Research and Support), precipitated by the death of a Chicago woman with postpartum psychosis, was introduced in 2009. The act was ultimately incorporated into the Patient Protection and Affordable Care Act, which passed in 2010. 
The act requires screening for postpartum mood disorders at any time during the adult lifespan and expands research on postpartum depression. Provisions of the act also authorize grants to support clinical services for women who have, or are at risk for, postpartum psychosis.", "title": "Current law" }, { "paragraph_id": 129, "text": "Since infanticide, especially neonaticide, is often a response to an unwanted birth, preventing unwanted pregnancies through improved sex education and increased contraceptive access is advocated as a way of preventing infanticide. Increased use of contraceptives and access to safe legal abortions have greatly reduced neonaticide in many developed nations. Some say that where abortion is illegal, as in Pakistan, infanticide would decline if safer legal abortions were available.", "title": "Prevention" }, { "paragraph_id": 130, "text": "Cases of infanticide have also garnered increasing attention and interest from advocates for the mentally ill as well as organizations dedicated to postpartum disorders. Following the trial of Andrea Yates, a mother from the United States who attracted national attention for drowning her five children, representatives from organizations such as Postpartum Support International and the Marcé Society for Treatment and Prevention of Postpartum Disorders began requesting clarification of diagnostic criteria for postpartum disorders and improved guidelines for treatments. While accounts of postpartum psychosis date back over 2,000 years, perinatal mental illness is still largely under-diagnosed despite postpartum psychosis affecting 1 to 2 per 1000 women. 
However, with clinical research continuing to demonstrate the large role of rapid neurochemical fluctuation in postpartum psychosis, prevention of infanticide points ever more strongly towards psychiatric intervention.", "title": "Prevention" }, { "paragraph_id": 131, "text": "Screening for psychiatric disorders or risk factors, and providing treatment or assistance to those at risk, may help prevent infanticide. Current diagnostic considerations include symptoms, psychological history, thoughts of self-harm or harming one's children, physical and neurological examination, laboratory testing, substance abuse, and brain imaging. As psychotic symptoms may fluctuate, it is important that diagnostic assessments cover a wide range of factors.", "title": "Prevention" }, { "paragraph_id": 132, "text": "While studies on the treatment of postpartum psychosis are scarce, a number of case and cohort studies have found evidence for the effectiveness of lithium monotherapy for both acute and maintenance treatment of postpartum psychosis, with the majority of patients achieving complete remission. Adjunctive treatments include electroconvulsive therapy, antipsychotic medication, or benzodiazepines. Electroconvulsive therapy, in particular, is the primary treatment for patients with catatonia, severe agitation, and difficulties eating or drinking. 
Antidepressants should be avoided throughout the acute treatment of postpartum psychosis due to the risk of worsening mood instability.", "title": "Prevention" }, { "paragraph_id": 133, "text": "Though screening and treatment may help prevent infanticide, in the developed world a significant proportion of detected neonaticides occur in young women who deny their pregnancies and avoid outside contact, many of whom may have limited contact with health care services.", "title": "Prevention" }, { "paragraph_id": 134, "text": "In some areas baby hatches or safe surrender sites, safe places for a mother to anonymously leave an infant, are offered, in part to reduce the rate of infanticide. In other places, like the United States, safe-haven laws allow mothers to anonymously give infants to designated officials; they are frequently located at hospitals and police and fire stations. Additionally, some countries in Europe have laws on anonymous birth and confidential birth that allow mothers to give up an infant after birth. In anonymous birth, the mother does not attach her name to the birth certificate. In confidential birth, the mother registers her name and information, but the document containing her name is sealed until the child comes of age. Typically such babies are put up for adoption, or cared for in orphanages.", "title": "Prevention" }, { "paragraph_id": 135, "text": "Granting women employment raises their status and autonomy. Having gainful employment can raise the perceived worth of females. This can lead to an increase in the number of women getting an education and a decrease in female infanticide. As a result, the infant mortality rate will decrease and economic development will increase.", "title": "Prevention" }, { "paragraph_id": 136, "text": "The practice has been observed in many other species of the animal kingdom since it was first seriously studied by Yukimaru Sugiyama. 
These range from microscopic rotifers and insects to fish, amphibians, birds and mammals, including primates such as chacma baboons.", "title": "In animals" }, { "paragraph_id": 137, "text": "According to studies of primates carried out at Kyoto University, including certain types of gorillas and chimpanzees, several conditions favor the tendency of males to kill offspring in some species, among them: nocturnal life, the absence of nest construction, marked sexual dimorphism in which the male is much larger than the female, mating in a specific season, and a long period of lactation without resumption of estrus in the female.", "title": "In animals" }, { "paragraph_id": 138, "text": "An instance in which a child born on an inauspicious day is to live or die according to the chance of being trampled by cattle (death being likely) is depicted in Infanticide in Madagascar, painted by Henry Melville and engraved by J. Redaway for Fisher's Drawing Room Scrap Book, 1838, with a poetical illustration and notes by Letitia Elizabeth Landon.", "title": "In Art and Literature" } ]
Infanticide is the intentional killing of infants or offspring. Infanticide was a widespread practice throughout human history that was mainly used to dispose of unwanted children, its main purpose being the prevention of resources being spent on weak or disabled offspring. Unwanted infants were usually abandoned to die of exposure, but in some societies they were deliberately killed. Infanticide is broadly illegal, but in some places the practice is tolerated, or the prohibition is not strictly enforced. Most Stone Age human societies routinely practiced infanticide, and estimates of children killed by infanticide in the Mesolithic and Neolithic eras vary from 15 to 50 percent. Infanticide continued to be common in most societies after the historical era began, including ancient Greece, ancient Rome, the Phoenicians, ancient China, ancient Japan, pre-Islamic Arabia, Aboriginal Australia, Native Americans, and Native Alaskans. Infanticide became forbidden in Europe and the Near East during the 1st millennium. Christianity forbade infanticide from its earliest times, which led Constantine the Great and Valentinian I to ban infanticide across the Roman Empire in the 4th century. Yet infanticide continued to be practiced in some wars, and in Europe it reached a peak during World War II (1939–45), with the Holocaust and the T4 Program. The practice ceased in Arabia in the 7th century after the founding of Islam, since the Quran prohibits infanticide. Infanticide of male babies had become uncommon in China by the Ming dynasty (1368–1644), whereas infanticide of female babies became more common during the One-Child Policy era (1979–2015). During the period of Company rule in India, the East India Company attempted to eliminate infanticide but was only partially successful, and female infanticide still continues in some parts of India. Infanticide is very rare in industrialised countries but may persist elsewhere. 
Researchers of parental infanticide have found that mothers are more likely than fathers to commit infanticide. In the special case of neonaticide, mothers account for almost all of the perpetrators; cases of paternal neonaticide are so rare that they are individually recorded.
2002-02-25T15:43:11Z
2023-12-30T07:50:53Z
[ "Template:Circa", "Template:Div col", "Template:Reflist", "Template:Cite journal", "Template:Cite Polish law", "Template:CE", "Template:Citation needed", "Template:Ill", "Template:See also", "Template:RP", "Template:Cite magazine", "Template:Cite Legislation AU", "Template:About", "Template:Homicide", "Template:Rp", "Template:Commons category", "Template:Cbignore", "Template:Short description", "Template:TOC limit", "Template:Qref", "Template:Cite book", "Template:Cite web", "Template:ISBN", "Template:Wiktionary", "Template:Main", "Template:Blockquote", "Template:How", "Template:Cite AustLII", "Template:Wikiquote", "Template:Authority control", "Template:Further", "Template:Attribution needed", "Template:BCE", "Template:Dead link", "Template:Ws", "Template:Webarchive", "Template:Div col end", "Template:Cite news" ]
https://en.wikipedia.org/wiki/Infanticide
15,476
Internet protocol suite
The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the set of communication protocols used in the Internet and similar computer networks according to functional criteria. The foundational protocols in the suite are the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP). Early versions of this networking model were known as the Department of Defense (DoD) model because the research and development were funded by the United States Department of Defense through DARPA. The Internet protocol suite provides end-to-end data communication specifying how data should be packetized, addressed, transmitted, routed, and received. This functionality is organized into four abstraction layers, which classify all related protocols according to each protocol's scope of networking. An implementation of the layers for a particular application forms a protocol stack. From lowest to highest, the layers are the link layer, containing communication methods for data that remains within a single network segment (link); the internet layer, providing internetworking between independent networks; the transport layer, handling host-to-host communication; and the application layer, providing process-to-process data exchange for applications. The technical standards underlying the Internet protocol suite and its constituent protocols are maintained by the Internet Engineering Task Force (IETF). The Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems. Initially referred to as the DOD Internet Architecture Model, the Internet protocol suite has its roots in research and development sponsored by the Defense Advanced Research Projects Agency (DARPA) in the late 1960s. After DARPA initiated the pioneering ARPANET in 1969, Steve Crocker established a "Networking Working Group" which developed a host-host protocol, the Network Control Program (NCP). 
In the early 1970s, DARPA started work on several other data transmission technologies, including mobile packet radio, packet satellite service, local area networks, and other data networks in the public and private domains. In 1972, Bob Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf joined Kahn with the goal of designing the next protocol generation for the ARPANET to enable internetworking. They drew on the experience from the ARPANET research community and the International Networking Working Group, which Cerf chaired. By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, in which the differences between local network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the existing ARPANET protocols, this function was delegated to the hosts. Cerf credits Louis Pouzin and Hubert Zimmermann, designers of the CYCLADES network, with important influences on this design. The new protocol was implemented as the Transmission Control Program in 1974. Initially, the Transmission Control Program (the Internet Protocol did not then exist as a separate protocol) provided only a reliable byte stream service to its users, not a datagram service. As experience with the protocol grew, collaborators recommended division of functionality into layers of distinct protocols, allowing users direct access to datagram service. 
Advocates included Danny Cohen, who needed it for his packet voice work; Jonathan Postel of the University of Southern California's Information Sciences Institute, who edited the Requests for Comments (RFCs), the technical and strategic document series that has both documented and catalyzed Internet development; and the research group of Robert Metcalfe at Xerox PARC. Postel stated, "We are screwing up in our design of Internet protocols by violating the principle of layering." Encapsulation of different mechanisms was intended to create an environment where the upper layers could access only what was needed from the lower layers. A monolithic design would be inflexible and lead to scalability issues. In version 3 of TCP, written in 1978, Cerf, Cohen and Postel split the Transmission Control Program into two distinct protocols, the Internet Protocol as a connectionless layer and the Transmission Control Protocol as a reliable connection-oriented service. The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. This design is known as the end-to-end principle. Using this design, it became possible to connect other networks to the ARPANET that used the same principle, irrespective of other local characteristics, thereby solving Kahn's initial internetworking problem. A popular expression is that TCP/IP, the eventual product of Cerf and Kahn's work, can run over "two tin cans and a string." Years later, as a joke, the IP over Avian Carriers formal protocol specification was created and successfully tested. DARPA contracted with BBN Technologies, Stanford University, and University College London to develop operational versions of the protocol on several hardware platforms. 
During development of the protocol, the version number of the packet routing layer progressed from version 1 to version 4, the latter of which was installed in the ARPANET in 1983. It became known as Internet Protocol version 4 (IPv4), the protocol that is still in use in the Internet, alongside its successor, Internet Protocol version 6 (IPv6). In 1975, a two-network IP communications test was performed between Stanford and University College London. In November 1977, a three-network IP test was conducted between sites in the US, the UK, and Norway. Several other IP prototypes were developed at multiple research centers between 1978 and 1983. A computer called a router is provided with an interface to each network. It forwards network packets back and forth between them. Originally a router was called a gateway, but the term was changed to avoid confusion with other types of gateways. In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In the same year, NORSAR and Peter Kirstein's research group at University College London adopted the protocol. The migration of the ARPANET from NCP to TCP/IP was officially completed on flag day January 1, 1983, when the new protocols were permanently activated. In 1985, the Internet Advisory Board (later Internet Architecture Board) held a three-day TCP/IP workshop for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use. In 1985, the first Interop conference focused on network interoperability by broader adoption of TCP/IP. The conference was founded by Dan Lynch, an early Internet activist. From the beginning, large corporations, such as IBM and DEC, attended the meeting. IBM, AT&T and DEC were the first major corporations to adopt TCP/IP, despite having competing proprietary protocols. In IBM, from 1984, Barry Appelman's group did TCP/IP development. 
They navigated the corporate politics to get a stream of TCP/IP products for various IBM systems, including MVS, VM, and OS/2. At the same time, several smaller companies, such as FTP Software and the Wollongong Group, began offering TCP/IP stacks for DOS and Microsoft Windows. The first VM/CMS TCP/IP stack came from the University of Wisconsin. Some of the early TCP/IP stacks were written single-handedly by a few programmers. Jay Elinsky and Oleg Vishnepolsky of IBM Research wrote TCP/IP stacks for VM/CMS and OS/2, respectively. In 1984, Donald Gillies at MIT wrote ntcp, a multi-connection TCP that ran atop the IP/PacketDriver layer maintained by John Romkey at MIT in 1983–84. Romkey leveraged this TCP in 1986 when FTP Software was founded. Starting in 1985, Phil Karn created a multi-connection TCP application for ham radio systems (KA9Q TCP). The spread of TCP/IP was fueled further in June 1989, when the University of California, Berkeley agreed to place the TCP/IP code developed for BSD UNIX into the public domain. Various corporate vendors, including IBM, included this code in commercial TCP/IP software releases. Microsoft released a native TCP/IP stack in Windows 95. This event helped cement TCP/IP's dominance over other protocols on Microsoft-based networks, which included IBM's Systems Network Architecture (SNA), and on other platforms such as Digital Equipment Corporation's DECnet, Open Systems Interconnection (OSI), and Xerox Network Systems (XNS). Nonetheless, for a period in the late 1980s and early 1990s, engineers, organizations and nations were polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks. The technical standards underlying the Internet protocol suite and its constituent protocols have been delegated to the Internet Engineering Task Force (IETF). 
The characteristic architecture of the Internet protocol suite is its broad division into operating scopes for the protocols that constitute its core functionality. The defining specification of the suite is RFC 1122, which broadly outlines four abstraction layers. These have stood the test of time, as the IETF has never modified this structure. As such a model of networking, the Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems. The end-to-end principle has evolved over time. Its original expression put the maintenance of state and overall intelligence at the edges, and assumed the Internet that connected the edges retained no state and concentrated on speed and simplicity. Real-world needs for firewalls, network address translators, web content caches and the like have forced changes in this principle. The robustness principle states: "In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior. That is, it must be careful to send well-formed datagrams, but must accept any datagram that it can interpret (e.g., not object to technical errors where the meaning is still clear)." "The second part of the principle is almost as important: software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features." Encapsulation is used to provide abstraction of protocols and services. Encapsulation is usually aligned with the division of the protocol suite into layers of general functionality. In general, an application (the highest level of the model) uses a set of protocols to send its data down the layers. The data is further encapsulated at each level. An early architectural document, RFC 1122, titled Host Requirements, emphasizes architectural principles over layering. 
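As a concrete illustration of the encapsulation principle described above, the sketch below wraps an application payload in a UDP-style header and then a minimal IPv4-style header using Python's standard struct module, mirroring how each layer prepends its own header as data moves down the stack. The field values (ports, addresses, TTL) are hypothetical, both checksum fields are left at zero, and this is only a byte-layout sketch, not a working network stack.

```python
import struct

def udp_encapsulate(payload: bytes, src_port: int, dst_port: int) -> bytes:
    """Prepend an 8-byte UDP-style header (source port, destination port,
    length, checksum); the checksum is left at 0 in this sketch."""
    length = 8 + len(payload)
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return header + payload

def ipv4_encapsulate(segment: bytes, src: bytes, dst: bytes, proto: int = 17) -> bytes:
    """Prepend a minimal 20-byte IPv4-style header (version 4, IHL 5)."""
    total_length = 20 + len(segment)
    header = struct.pack("!BBHHHBBH4s4s",
                         0x45, 0, total_length,  # version/IHL, DSCP/ECN, total length
                         0, 0,                   # identification, flags/fragment offset
                         64, proto, 0,           # TTL, protocol (17 = UDP), checksum
                         src, dst)
    return header + segment

# Application data moves down the layers, gaining one header per layer.
app_data = b"hello"
segment = udp_encapsulate(app_data, src_port=49152, dst_port=53)
packet = ipv4_encapsulate(segment, src=bytes([192, 0, 2, 1]), dst=bytes([192, 0, 2, 2]))

# The receiver reverses the process: strip the IP header, then the UDP header.
assert packet[20:] == segment and packet[28:] == app_data
```

The nesting makes the layering abstraction visible: the internet layer treats the whole UDP segment as opaque payload, just as the transport layer treats the application data.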
RFC 1122 is structured in sections referring to layers, but the document refers to many other architectural principles, and does not emphasize layering. It loosely defines a four-layer model, with the layers having names, not numbers: the link layer, the internet layer, the transport layer, and the application layer. The protocols of the link layer operate within the scope of the local network connection to which a host is attached. This regime is called the link in TCP/IP parlance and is the lowest component layer of the suite. The link includes all hosts accessible without traversing a router. The size of the link is therefore determined by the networking hardware design. In principle, TCP/IP is designed to be hardware independent and may be implemented on top of virtually any link-layer technology. This includes not only hardware implementations but also virtual link layers such as virtual private networks and networking tunnels. The link layer is used to move packets between the internet layer interfaces of two different hosts on the same link. The processes of transmitting and receiving packets on the link can be controlled in the device driver for the network card, as well as in firmware or by specialized chipsets. These perform functions, such as framing, to prepare the internet layer packets for transmission, and finally transmit the frames to the physical layer and over a transmission medium. The TCP/IP model includes specifications for translating the network addressing methods used in the Internet Protocol to link-layer addresses, such as media access control (MAC) addresses. All other aspects below that level, however, are implicitly assumed to exist and are not explicitly defined in the TCP/IP model. The link layer in the TCP/IP model has corresponding functions in Layer 2 of the OSI model. Internetworking requires sending data from the source network to the destination network. This process is called routing and is supported by host addressing and identification using the hierarchical IP addressing system. 
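The four named layers can be summarized as a small lookup table; the sketch below is purely illustrative, with example protocols and functions drawn from those named in this article rather than any exhaustive registry.

```python
# RFC 1122's four named layers, each mapped to illustrative examples
# of protocols or functions mentioned in this article.
TCP_IP_LAYERS = {
    "application": ["HTTP", "FTP", "SMTP", "DHCP", "DNS"],
    "transport":   ["TCP", "UDP", "SCTP"],
    "internet":    ["IP", "ICMP", "IGMP"],
    "link":        ["framing", "MAC addressing", "device drivers"],
}

def layer_of(name: str) -> str:
    """Return the layer a protocol or function belongs to in this model."""
    for layer, members in TCP_IP_LAYERS.items():
        if name in members:
            return layer
    raise KeyError(name)

assert layer_of("UDP") == "transport"
assert layer_of("ICMP") == "internet"
```

Note that the model names layers rather than numbering them, which is why the table keys are words; the OSI model, by contrast, numbers its seven layers.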
The internet layer provides an unreliable datagram transmission facility between hosts located on potentially different IP networks by forwarding datagrams to an appropriate next-hop router for further relaying to its destination. The internet layer has the responsibility of sending packets across potentially multiple networks. With this functionality, the internet layer makes possible internetworking, the interworking of different IP networks, and it essentially establishes the Internet. The internet layer does not distinguish between the various transport layer protocols. IP carries data for a variety of different upper layer protocols. These protocols are each identified by a unique protocol number: for example, Internet Control Message Protocol (ICMP) and Internet Group Management Protocol (IGMP) are protocols 1 and 2, respectively. The Internet Protocol is the principal component of the internet layer, and it defines two addressing systems to identify network hosts and to locate them on the network. The original address system of the ARPANET and its successor, the Internet, is Internet Protocol version 4 (IPv4). It uses a 32-bit IP address and is therefore capable of identifying approximately four billion hosts. This limitation was eliminated in 1998 by the standardization of Internet Protocol version 6 (IPv6) which uses 128-bit addresses. IPv6 production implementations emerged in approximately 2006. The transport layer establishes basic data channels that applications use for task-specific data exchange. The layer establishes host-to-host connectivity in the form of end-to-end message transfer services that are independent of the underlying network and independent of the structure of user data and the logistics of exchanging information. Connectivity at the transport layer can be categorized as either connection-oriented, implemented in TCP, or connectionless, implemented in UDP. 
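The difference between the two address systems can be seen directly with Python's standard ipaddress module; the specific addresses below are drawn from the documentation-reserved ranges and are illustrative only.

```python
import ipaddress

# IPv4 uses 32-bit addresses: roughly four billion possible hosts.
assert ipaddress.IPv4Network("0.0.0.0/0").num_addresses == 2**32  # 4,294,967,296

# IPv6 uses 128-bit addresses, eliminating that limitation.
assert ipaddress.IPv6Network("::/0").num_addresses == 2**128

v4 = ipaddress.ip_address("192.0.2.1")    # from the TEST-NET-1 documentation range
v6 = ipaddress.ip_address("2001:db8::1")  # from the IPv6 documentation prefix
assert (v4.version, v6.version) == (4, 6)
assert v4.packed == b"\xc0\x00\x02\x01"   # the raw 4-byte (32-bit) form
assert len(v6.packed) == 16               # 16 bytes = 128 bits
```

The `packed` forms are exactly the byte strings that appear in the source and destination fields of IPv4 and IPv6 packet headers.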
The protocols in this layer may provide error control, segmentation, flow control, congestion control, and application addressing (port numbers). For the purpose of providing process-specific transmission channels for applications, the layer establishes the concept of the network port. This is a numbered logical construct allocated specifically for each of the communication channels an application needs. For many types of services, these port numbers have been standardized so that client computers may address specific services of a server computer without the involvement of service discovery or directory services. Because IP provides only a best-effort delivery, some transport-layer protocols offer reliability. TCP is a connection-oriented protocol that addresses numerous reliability issues in providing a reliable byte stream. The newer Stream Control Transmission Protocol (SCTP) is also a reliable, connection-oriented transport mechanism. It is message-stream-oriented, not byte-stream-oriented like TCP, and provides multiple streams multiplexed over a single connection. It also provides multihoming support, in which a connection end can be represented by multiple IP addresses (representing multiple physical interfaces), such that if one fails, the connection is not interrupted. It was developed initially for telephony applications (to transport SS7 over IP). Reliability can also be achieved by running IP over a reliable data-link protocol such as the High-Level Data Link Control (HDLC). The User Datagram Protocol (UDP) is a connectionless datagram protocol. Like IP, it is a best-effort, unreliable protocol. Reliability is addressed through error detection using a checksum algorithm. UDP is typically used for applications such as streaming media (audio, video, Voice over IP, etc.) 
where on-time arrival is more important than reliability, or for simple query/response applications like DNS lookups, where the overhead of setting up a reliable connection is disproportionately large. Real-time Transport Protocol (RTP) is a datagram protocol that is used over UDP and is designed for real-time data such as streaming media. The applications at any given network address are distinguished by their TCP or UDP port. By convention, certain well-known ports are associated with specific applications. The TCP/IP model's transport or host-to-host layer corresponds roughly to the fourth layer in the OSI model, also called the transport layer. QUIC is rapidly emerging as an alternative transport protocol. Whilst it is technically carried via UDP packets, it seeks to offer enhanced transport connectivity relative to TCP. HTTP/3 works exclusively via QUIC. The application layer includes the protocols used by most applications for providing user services or exchanging application data over the network connections established by the lower-level protocols. This may include some basic network support services such as routing protocols and host configuration. Examples of application layer protocols include the Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), the Simple Mail Transfer Protocol (SMTP), and the Dynamic Host Configuration Protocol (DHCP). Data coded according to application layer protocols are encapsulated into transport layer protocol units (such as TCP streams or UDP datagrams), which in turn use lower layer protocols to effect actual data transfer. The TCP/IP model does not consider the specifics of formatting and presenting data and does not define additional layers between the application and transport layers as in the OSI model (presentation and session layers). According to the TCP/IP model, such functions are the realm of libraries and application programming interfaces. 
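The simple query/response pattern described for UDP can be sketched with Python's standard socket API. The snippet below runs a one-shot echo exchange over the loopback interface: note that there is no connection setup, just a datagram sent to a port and a datagram returned. The port is OS-assigned and the payload is illustrative; a real service such as DNS would use a well-known port and its own message format.

```python
import socket
import threading

def udp_echo_server(sock: socket.socket) -> None:
    """Answer a single datagram by echoing it back to the sender."""
    data, addr = sock.recvfrom(2048)  # no handshake: just read one datagram
    sock.sendto(data, addr)

# Bind to port 0 so the OS picks a free port for the server.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
server_port = server.getsockname()[1]
threading.Thread(target=udp_echo_server, args=(server,), daemon=True).start()

# The client sends one datagram and waits for the reply; delivery is
# best-effort, so real clients also handle timeouts and retries.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5)
client.sendto(b"query", ("127.0.0.1", server_port))
reply, _ = client.recvfrom(2048)
assert reply == b"query"
client.close()
server.close()
```

The entire exchange is two datagrams, which is exactly why UDP suits lookups where TCP's connection setup would be disproportionate overhead.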
The application layer in the TCP/IP model is often compared to a combination of the fifth (session), sixth (presentation), and seventh (application) layers of the OSI model. Application layer protocols are often associated with particular client–server applications, and common services have well-known port numbers reserved by the Internet Assigned Numbers Authority (IANA). For example, the HyperText Transfer Protocol uses server port 80 and Telnet uses server port 23. Clients connecting to a service usually use ephemeral ports, i.e., port numbers assigned only for the duration of the transaction at random or from a specific range configured in the application. At the application layer, the TCP/IP model distinguishes between user protocols and support protocols. Support protocols provide services to a system of network infrastructure. User protocols are used for actual user applications. For example, FTP is a user protocol and DNS is a support protocol. Although the applications are usually aware of key qualities of the transport layer connection such as the endpoint IP addresses and port numbers, application layer protocols generally treat the transport layer (and lower) protocols as black boxes which provide a stable network connection across which to communicate. The transport layer and lower-level layers are unconcerned with the specifics of application layer protocols. Routers and switches do not typically examine the encapsulated traffic; rather, they just provide a conduit for it. However, some firewall and bandwidth throttling applications use deep packet inspection to interpret application data. An example is the Resource Reservation Protocol (RSVP). It is also sometimes necessary for applications affected by NAT to consider the application payload. The Internet protocol suite evolved through research and development funded over a period of time. In this process, the specifics of protocol components and their layering changed. 
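The ephemeral-port behavior mentioned above can be observed directly: binding a socket to port 0 asks the operating system to assign a short-lived port from its configured range. The exact range is OS-dependent, so this sketch only checks that the result is a valid nonzero port rather than asserting a particular range.

```python
import socket

# Requesting port 0 tells the OS to allocate an ephemeral port.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
ephemeral_port = sock.getsockname()[1]

# The range varies by system (IANA suggests 49152-65535, while Linux
# defaults to a different range), so only basic validity is checked here.
assert 0 < ephemeral_port <= 65535
sock.close()
```

This is the client side of the well-known-port convention: the server listens on a fixed, registered port, while each client lets the OS pick a throwaway port to identify its end of the conversation.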
In addition, parallel research and commercial interests from industry associations competed with design features. In particular, efforts in the International Organization for Standardization led to a similar goal, but with a wider scope of networking in general. Efforts to consolidate the two principal schools of layering, which were superficially similar, but diverged sharply in detail, led independent textbook authors to formulate abridging teaching tools. The following table shows various such networking models. The number of layers varies between three and seven. Some of the networking models are from textbooks, which are secondary sources that may conflict with the intent of RFC 1122 and other IETF primary sources. The three top layers in the OSI model, i.e. the application layer, the presentation layer and the session layer, are not distinguished separately in the TCP/IP model which only has an application layer above the transport layer. While some pure OSI protocol applications, such as X.400, also combined them, there is no requirement that a TCP/IP protocol stack must impose monolithic architecture above the transport layer. For example, the NFS application protocol runs over the External Data Representation (XDR) presentation protocol, which, in turn, runs over a protocol called Remote Procedure Call (RPC). RPC provides reliable record transmission, so it can safely use the best-effort UDP transport. Different authors have interpreted the TCP/IP model differently, and disagree whether the link layer, or any aspect of the TCP/IP model, covers OSI layer 1 (physical layer) issues, or whether TCP/IP assumes a hardware layer exists below the link layer. Several authors have attempted to incorporate the OSI model's layers 1 and 2 into the TCP/IP model since these are commonly referred to in modern standards (for example, by IEEE and ITU). 
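The protocol stacking discussed above, in which each layer wraps the unit handed down from the layer above, can be illustrated with deliberately simplified headers. The formats below are invented for the example and are far simpler than real TCP/IP headers:

```python
import struct

def transport_wrap(payload: bytes, src_port: int, dst_port: int) -> bytes:
    # Toy "segment": a 4-byte header of two 16-bit ports, then the payload.
    return struct.pack("!HH", src_port, dst_port) + payload

def internet_wrap(segment: bytes, src_ip: str, dst_ip: str) -> bytes:
    # Toy "packet": textual addresses separated by '|', then the segment.
    return src_ip.encode() + b"|" + dst_ip.encode() + b"|" + segment

# Sending: the application's bytes descend the stack, gaining one header per layer.
packet = internet_wrap(transport_wrap(b"GET /", 49152, 80),
                       "192.0.2.10", "198.51.100.7")

# Receiving: each layer strips its own header and hands the rest upward.
src_ip, dst_ip, segment = packet.split(b"|", 2)
src_port, dst_port = struct.unpack("!HH", segment[:4])
data = segment[4:]
print(src_ip, dst_ip, src_port, dst_port, data)
```

The point of the exercise is that no layer inspects the bytes belonging to the layers above it, which is what allows the stacks compared in this section to differ in layer count while carrying the same traffic.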
This often results in a model with five layers, where the link layer or network access layer is split into the OSI model's layers 1 and 2. The IETF protocol development effort is not concerned with strict layering. Some of its protocols may not fit cleanly into the OSI model, although RFCs sometimes refer to it and often use the old OSI layer numbers. The IETF has repeatedly stated that Internet Protocol and architecture development is not intended to be OSI-compliant. RFC 3439, referring to the internet architecture, contains a section entitled: "Layering Considered Harmful". For example, the session and presentation layers of the OSI suite are considered to be included in the application layer of the TCP/IP suite. The functionality of the session layer can be found in protocols like HTTP and SMTP and is more evident in protocols like Telnet and the Session Initiation Protocol (SIP). Session-layer functionality is also realized with the port numbering of the TCP and UDP protocols, which are included in the transport layer of the TCP/IP suite. Functions of the presentation layer are realized in the TCP/IP applications with the MIME standard in data exchange. Another difference is in the treatment of routing protocols. The OSI routing protocol IS-IS belongs to the network layer, and does not depend on CLNS for delivering packets from one router to another, but defines its own layer-3 encapsulation. In contrast, OSPF, RIP, BGP and other routing protocols defined by the IETF are transported over IP, and, for the purpose of sending and receiving routing protocol packets, routers act as hosts. As a consequence, RFC 1812 includes routing protocols in the application layer. Some authors, such as Tanenbaum in Computer Networks, describe routing protocols in the same layer as IP, reasoning that routing protocols inform decisions made by the forwarding process of routers.
IETF protocols can be encapsulated recursively, as demonstrated by tunnelling protocols such as Generic Routing Encapsulation (GRE). GRE uses the same mechanism that OSI uses for tunnelling at the network layer. The Internet protocol suite does not presume any specific hardware or software environment. It requires only hardware and a software layer capable of sending and receiving packets on a computer network. As a result, the suite has been implemented on essentially every computing platform. A minimal implementation of TCP/IP includes the following: Internet Protocol (IP), Address Resolution Protocol (ARP), Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Internet Group Management Protocol (IGMP). In addition to IP, ICMP, TCP, and UDP, Internet Protocol version 6 requires Neighbor Discovery Protocol (NDP), ICMPv6, and Multicast Listener Discovery (MLD) and is often accompanied by an integrated IPsec security layer.
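One concrete piece shared by several protocols in such a minimal implementation is the one's-complement Internet checksum of RFC 1071, used by the IPv4 header, ICMP, TCP and UDP. A compact sketch follows; the sample header bytes are arbitrary, not a complete real header:

```python
def internet_checksum(data: bytes) -> int:
    # RFC 1071: sum the data as 16-bit big-endian words in one's-complement
    # arithmetic, then return the complement of the folded sum.
    if len(data) % 2:
        data += b"\x00"                        # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                         # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# A receiver verifies by summing the data together with its checksum field;
# the result must be zero.
header = b"\x45\x00\x00\x3c\x1c\x46\x40\x00\x40\x06"   # arbitrary sample bytes
csum = internet_checksum(header)
assert internet_checksum(header + csum.to_bytes(2, "big")) == 0
```

The same routine serves every protocol in the suite that carries this checksum, which is one reason even very small stacks can cover IP, ICMP, TCP and UDP together.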
[ { "paragraph_id": 0, "text": "The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the set of communication protocols used in the Internet and similar computer networks according to functional criteria. The foundational protocols in the suite are the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP). Early versions of this networking model were known as the Department of Defense (DoD) model because the research and development were funded by the United States Department of Defense through DARPA.", "title": "" }, { "paragraph_id": 1, "text": "The Internet protocol suite provides end-to-end data communication specifying how data should be packetized, addressed, transmitted, routed, and received. This functionality is organized into four abstraction layers, which classify all related protocols according to each protocol's scope of networking. An implementation of the layers for a particular application forms a protocol stack. From lowest to highest, the layers are the link layer, containing communication methods for data that remains within a single network segment (link); the internet layer, providing internetworking between independent networks; the transport layer, handling host-to-host communication; and the application layer, providing process-to-process data exchange for applications.", "title": "" }, { "paragraph_id": 2, "text": "The technical standards underlying the Internet protocol suite and its constituent protocols are maintained by the Internet Engineering Task Force (IETF). The Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems.", "title": "" }, { "paragraph_id": 3, "text": "Initially referred to as the DOD Internet Architecture Model, the Internet protocol suite has its roots in research and development sponsored by the Defense Advanced Research Projects Agency (DARPA) in the late 1960s. 
After DARPA initiated the pioneering ARPANET in 1969, Steve Crocker established a \"Networking Working Group\" which developed a host-host protocol, the Network Control Program (NCP). In the early 1970s, DARPA started work on several other data transmission technologies, including mobile packet radio, packet satellite service, local area networks, and other data networks in the public and private domains. In 1972, Bob Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf joined Kahn with the goal of designing the next protocol generation for the ARPANET to enable internetworking. They drew on the experience from the ARPANET research community and the International Networking Working Group, which Cerf chaired.", "title": "History" }, { "paragraph_id": 4, "text": "By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, in which the differences between local network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the existing ARPANET protocols, this function was delegated to the hosts. Cerf credits Louis Pouzin and Hubert Zimmermann, designers of the CYCLADES network, with important influences on this design. The new protocol was implemented as the Transmission Control Program in 1974.", "title": "History" }, { "paragraph_id": 5, "text": "Initially, the Transmission Control Program (the Internet Protocol did not then exist as a separate protocol) provided only a reliable byte stream service to its users, not datagrams. As experience with the protocol grew, collaborators recommended division of functionality into layers of distinct protocols, allowing users direct access to datagram service.
Advocates included Danny Cohen, who needed it for his packet voice work; Jonathan Postel of the University of Southern California's Information Sciences Institute, who edited the Request for Comments (RFCs), the technical and strategic document series that has both documented and catalyzed Internet development; and the research group of Robert Metcalfe at Xerox PARC. Postel stated, \"We are screwing up in our design of Internet protocols by violating the principle of layering.\" Encapsulation of different mechanisms was intended to create an environment where the upper layers could access only what was needed from the lower layers. A monolithic design would be inflexible and lead to scalability issues. In version 3 of TCP, written in 1978, Cerf, Cohen and Postel split the Transmission Control Program into two distinct protocols, the Internet Protocol as connectionless layer and the Transmission Control Protocol as a reliable connection-oriented service.", "title": "History" }, { "paragraph_id": 6, "text": "The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. This design is known as the end-to-end principle. Using this design, it became possible to connect other networks to the ARPANET that used the same principle, irrespective of other local characteristics, thereby solving Kahn's initial internetworking problem. 
A popular expression is that TCP/IP, the eventual product of Cerf and Kahn's work, can run over \"two tin cans and a string.\" Years later, as a joke, the IP over Avian Carriers formal protocol specification was created and successfully tested.", "title": "History" }, { "paragraph_id": 7, "text": "DARPA contracted with BBN Technologies, Stanford University, and the University College London to develop operational versions of the protocol on several hardware platforms. During development of the protocol the version number of the packet routing layer progressed from version 1 to version 4, the latter of which was installed in the ARPANET in 1983. It became known as Internet Protocol version 4 (IPv4) as the protocol that is still in use in the Internet, alongside its current successor, Internet Protocol version 6 (IPv6).", "title": "History" }, { "paragraph_id": 8, "text": "In 1975, a two-network IP communications test was performed between Stanford and University College London. In November 1977, a three-network IP test was conducted between sites in the US, the UK, and Norway. Several other IP prototypes were developed at multiple research centers between 1978 and 1983.", "title": "History" }, { "paragraph_id": 9, "text": "A computer called a router is provided with an interface to each network. It forwards network packets back and forth between them. Originally a router was called gateway, but the term was changed to avoid confusion with other types of gateways.", "title": "History" }, { "paragraph_id": 10, "text": "In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In the same year, NORSAR and Peter Kirstein's research group at University College London adopted the protocol. 
The migration of the ARPANET from NCP to TCP/IP was officially completed on flag day January 1, 1983, when the new protocols were permanently activated.", "title": "History" }, { "paragraph_id": 11, "text": "In 1985, the Internet Advisory Board (later Internet Architecture Board) held a three-day TCP/IP workshop for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use. In 1985, the first Interop conference focused on network interoperability by broader adoption of TCP/IP. The conference was founded by Dan Lynch, an early Internet activist. From the beginning, large corporations, such as IBM and DEC, attended the meeting.", "title": "History" }, { "paragraph_id": 12, "text": "IBM, AT&T and DEC were the first major corporations to adopt TCP/IP, this despite having competing proprietary protocols. In IBM, from 1984, Barry Appelman's group did TCP/IP development. They navigated the corporate politics to get a stream of TCP/IP products for various IBM systems, including MVS, VM, and OS/2. At the same time, several smaller companies, such as FTP Software and the Wollongong Group, began offering TCP/IP stacks for DOS and Microsoft Windows. The first VM/CMS TCP/IP stack came from the University of Wisconsin.", "title": "History" }, { "paragraph_id": 13, "text": "Some of the early TCP/IP stacks were written single-handedly by a few programmers. Jay Elinsky and Oleg Vishnepolsky of IBM Research wrote TCP/IP stacks for VM/CMS and OS/2, respectively. In 1984 Donald Gillies at MIT wrote a ntcp multi-connection TCP which runs atop the IP/PacketDriver layer maintained by John Romkey at MIT in 1983–84. Romkey leveraged this TCP in 1986 when FTP Software was founded. 
Starting in 1985, Phil Karn created a multi-connection TCP application for ham radio systems (KA9Q TCP).", "title": "History" }, { "paragraph_id": 14, "text": "The spread of TCP/IP was fueled further in June 1989, when the University of California, Berkeley agreed to place the TCP/IP code developed for BSD UNIX into the public domain. Various corporate vendors, including IBM, included this code in commercial TCP/IP software releases. Microsoft released a native TCP/IP stack in Windows 95. This event helped cement TCP/IP's dominance over other protocols on Microsoft-based networks, which included IBM's Systems Network Architecture (SNA), and on other platforms such as Digital Equipment Corporation's DECnet, Open Systems Interconnection (OSI), and Xerox Network Systems (XNS).", "title": "History" }, { "paragraph_id": 15, "text": "Nonetheless, for a period in the late 1980s and early 1990s, engineers, organizations and nations were polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks.", "title": "History" }, { "paragraph_id": 16, "text": "The technical standards underlying the Internet protocol suite and its constituent protocols have been delegated to the Internet Engineering Task Force (IETF).", "title": "History" }, { "paragraph_id": 17, "text": "The characteristic architecture of the Internet protocol suite is its broad division into operating scopes for the protocols that constitute its core functionality. The defining specification of the suite is RFC 1122, which broadly outlines four abstraction layers. These have stood the test of time, as the IETF has never modified this structure. As such a model of networking, the Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems.", "title": "History" }, { "paragraph_id": 18, "text": "The end-to-end principle has evolved over time. 
Its original expression put the maintenance of state and overall intelligence at the edges, and assumed the Internet that connected the edges retained no state and concentrated on speed and simplicity. Real-world needs for firewalls, network address translators, web content caches and the like have forced changes in this principle.", "title": "Key architectural principles" }, { "paragraph_id": 19, "text": "The robustness principle states: \"In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior. That is, it must be careful to send well-formed datagrams, but must accept any datagram that it can interpret (e.g., not object to technical errors where the meaning is still clear).\" \"The second part of the principle is almost as important: software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features.\"", "title": "Key architectural principles" }, { "paragraph_id": 20, "text": "Encapsulation is used to provide abstraction of protocols and services. Encapsulation is usually aligned with the division of the protocol suite into layers of general functionality. In general, an application (the highest level of the model) uses a set of protocols to send its data down the layers. The data is further encapsulated at each level.", "title": "Key architectural principles" }, { "paragraph_id": 21, "text": "An early architectural document, RFC 1122, titled Host Requirements, emphasizes architectural principles over layering. RFC 1122 is structured in sections referring to layers, but the document refers to many other architectural principles, and does not emphasize layering. It loosely defines a four-layer model, with the layers having names, not numbers, as follows:", "title": "Key architectural principles" }, { "paragraph_id": 22, "text": "The protocols of the link layer operate within the scope of the local network connection to which a host is attached. 
This regime is called the link in TCP/IP parlance and is the lowest component layer of the suite. The link includes all hosts accessible without traversing a router. The size of the link is therefore determined by the networking hardware design. In principle, TCP/IP is designed to be hardware independent and may be implemented on top of virtually any link-layer technology. This includes not only hardware implementations but also virtual link layers such as virtual private networks and networking tunnels.", "title": "Link layer" }, { "paragraph_id": 23, "text": "The link layer is used to move packets between the internet layer interfaces of two different hosts on the same link. The processes of transmitting and receiving packets on the link can be controlled in the device driver for the network card, as well as in firmware or by specialized chipsets. These perform functions, such as framing, to prepare the internet layer packets for transmission, and finally transmit the frames to the physical layer and over a transmission medium. The TCP/IP model includes specifications for translating the network addressing methods used in the Internet Protocol to link-layer addresses, such as media access control (MAC) addresses. All other aspects below that level, however, are implicitly assumed to exist and are not explicitly defined in the TCP/IP model.", "title": "Link layer" }, { "paragraph_id": 24, "text": "The link layer in the TCP/IP model has corresponding functions in Layer 2 of the OSI model.", "title": "Link layer" }, { "paragraph_id": 25, "text": "Internetworking requires sending data from the source network to the destination network. This process is called routing and is supported by host addressing and identification using the hierarchical IP addressing system. 
The internet layer provides an unreliable datagram transmission facility between hosts located on potentially different IP networks by forwarding datagrams to an appropriate next-hop router for further relaying to its destination. The internet layer has the responsibility of sending packets across potentially multiple networks. With this functionality, the internet layer makes possible internetworking, the interworking of different IP networks, and it essentially establishes the Internet.", "title": "Internet layer" }, { "paragraph_id": 26, "text": "The internet layer does not distinguish between the various transport layer protocols. IP carries data for a variety of different upper layer protocols. These protocols are each identified by a unique protocol number: for example, Internet Control Message Protocol (ICMP) and Internet Group Management Protocol (IGMP) are protocols 1 and 2, respectively.", "title": "Internet layer" }, { "paragraph_id": 27, "text": "The Internet Protocol is the principal component of the internet layer, and it defines two addressing systems to identify network hosts and to locate them on the network. The original address system of the ARPANET and its successor, the Internet, is Internet Protocol version 4 (IPv4). It uses a 32-bit IP address and is therefore capable of identifying approximately four billion hosts. This limitation was eliminated in 1998 by the standardization of Internet Protocol version 6 (IPv6) which uses 128-bit addresses. IPv6 production implementations emerged in approximately 2006.", "title": "Internet layer" }, { "paragraph_id": 28, "text": "The transport layer establishes basic data channels that applications use for task-specific data exchange. The layer establishes host-to-host connectivity in the form of end-to-end message transfer services that are independent of the underlying network and independent of the structure of user data and the logistics of exchanging information. 
Connectivity at the transport layer can be categorized as either connection-oriented, implemented in TCP, or connectionless, implemented in UDP. The protocols in this layer may provide error control, segmentation, flow control, congestion control, and application addressing (port numbers).", "title": "Transport layer" }, { "paragraph_id": 29, "text": "For the purpose of providing process-specific transmission channels for applications, the layer establishes the concept of the network port. This is a numbered logical construct allocated specifically for each of the communication channels an application needs. For many types of services, these port numbers have been standardized so that client computers may address specific services of a server computer without the involvement of service discovery or directory services.", "title": "Transport layer" }, { "paragraph_id": 30, "text": "Because IP provides only a best-effort delivery, some transport-layer protocols offer reliability.", "title": "Transport layer" }, { "paragraph_id": 31, "text": "TCP is a connection-oriented protocol that addresses numerous reliability issues in providing a reliable byte stream:", "title": "Transport layer" }, { "paragraph_id": 32, "text": "The newer Stream Control Transmission Protocol (SCTP) is also a reliable, connection-oriented transport mechanism. It is message-stream-oriented, not byte-stream-oriented like TCP, and provides multiple streams multiplexed over a single connection. It also provides multihoming support, in which a connection end can be represented by multiple IP addresses (representing multiple physical interfaces), such that if one fails, the connection is not interrupted. 
It was developed initially for telephony applications (to transport SS7 over IP).", "title": "Transport layer" }, { "paragraph_id": 33, "text": "Reliability can also be achieved by running IP over a reliable data-link protocol such as the High-Level Data Link Control (HDLC).", "title": "Transport layer" }, { "paragraph_id": 34, "text": "The User Datagram Protocol (UDP) is a connectionless datagram protocol. Like IP, it is a best-effort, unreliable protocol. It provides error detection through a checksum, but does not guarantee delivery. UDP is typically used for applications such as streaming media (audio, video, Voice over IP, etc.) where on-time arrival is more important than reliability, or for simple query/response applications like DNS lookups, where the overhead of setting up a reliable connection is disproportionately large. Real-time Transport Protocol (RTP) is a datagram protocol that is used over UDP and is designed for real-time data such as streaming media.", "title": "Transport layer" }, { "paragraph_id": 35, "text": "The applications at any given network address are distinguished by their TCP or UDP port. By convention, certain well-known ports are associated with specific applications.", "title": "Transport layer" }, { "paragraph_id": 36, "text": "The TCP/IP model's transport or host-to-host layer corresponds roughly to the fourth layer in the OSI model, also called the transport layer.", "title": "Transport layer" }, { "paragraph_id": 37, "text": "QUIC is rapidly emerging as an alternative transport protocol. Whilst it is technically carried via UDP packets it seeks to offer enhanced transport connectivity relative to TCP. HTTP/3 works exclusively via QUIC.", "title": "Transport layer" }, { "paragraph_id": 38, "text": "The application layer includes the protocols used by most applications for providing user services or exchanging application data over the network connections established by the lower-level protocols.
This may include some basic network support services such as routing protocols and host configuration. Examples of application layer protocols include the Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), the Simple Mail Transfer Protocol (SMTP), and the Dynamic Host Configuration Protocol (DHCP). Data coded according to application layer protocols are encapsulated into transport layer protocol units (such as TCP streams or UDP datagrams), which in turn use lower layer protocols to effect actual data transfer.", "title": "Application layer" }, { "paragraph_id": 39, "text": "The TCP/IP model does not consider the specifics of formatting and presenting data and does not define additional layers between the application and transport layers as in the OSI model (presentation and session layers). According to the TCP/IP model, such functions are the realm of libraries and application programming interfaces. The application layer in the TCP/IP model is often compared to a combination of the fifth (session), sixth (presentation), and seventh (application) layers of the OSI model.", "title": "Application layer" }, { "paragraph_id": 40, "text": "Application layer protocols are often associated with particular client–server applications, and common services have well-known port numbers reserved by the Internet Assigned Numbers Authority (IANA). For example, the HyperText Transfer Protocol uses server port 80 and Telnet uses server port 23. Clients connecting to a service usually use ephemeral ports, i.e., port numbers assigned only for the duration of the transaction at random or from a specific range configured in the application.", "title": "Application layer" }, { "paragraph_id": 41, "text": "At the application layer, the TCP/IP model distinguishes between user protocols and support protocols. Support protocols provide services to a system of network infrastructure. User protocols are used for actual user applications. 
For example, FTP is a user protocol and DNS is a support protocol.", "title": "Application layer" }, { "paragraph_id": 42, "text": "Although the applications are usually aware of key qualities of the transport layer connection such as the endpoint IP addresses and port numbers, application layer protocols generally treat the transport layer (and lower) protocols as black boxes which provide a stable network connection across which to communicate. The transport layer and lower-level layers are unconcerned with the specifics of application layer protocols. Routers and switches do not typically examine the encapsulated traffic; rather, they just provide a conduit for it. However, some firewall and bandwidth throttling applications use deep packet inspection to interpret application data. An example is the Resource Reservation Protocol (RSVP). It is also sometimes necessary for applications affected by NAT to consider the application payload.", "title": "Application layer" }, { "paragraph_id": 43, "text": "The Internet protocol suite evolved through research and development funded over a period of time. In this process, the specifics of protocol components and their layering changed. In addition, parallel research and commercial interests from industry associations competed with design features. In particular, efforts in the International Organization for Standardization led to a similar goal, but with a wider scope of networking in general. Efforts to consolidate the two principal schools of layering, which were superficially similar, but diverged sharply in detail, led independent textbook authors to formulate abridging teaching tools.", "title": "Layering evolution and representations in the literature" }, { "paragraph_id": 44, "text": "The following table shows various such networking models.
The number of layers varies between three and seven.", "title": "Layering evolution and representations in the literature" }, { "paragraph_id": 45, "text": "Some of the networking models are from textbooks, which are secondary sources that may conflict with the intent of RFC 1122 and other IETF primary sources.", "title": "Layering evolution and representations in the literature" }, { "paragraph_id": 46, "text": "The three top layers in the OSI model, i.e. the application layer, the presentation layer and the session layer, are not distinguished separately in the TCP/IP model which only has an application layer above the transport layer. While some pure OSI protocol applications, such as X.400, also combined them, there is no requirement that a TCP/IP protocol stack must impose monolithic architecture above the transport layer. For example, the NFS application protocol runs over the External Data Representation (XDR) presentation protocol, which, in turn, runs over a protocol called Remote Procedure Call (RPC). RPC provides reliable record transmission, so it can safely use the best-effort UDP transport.", "title": "Comparison of TCP/IP and OSI layering" }, { "paragraph_id": 47, "text": "Different authors have interpreted the TCP/IP model differently, and disagree whether the link layer, or any aspect of the TCP/IP model, covers OSI layer 1 (physical layer) issues, or whether TCP/IP assumes a hardware layer exists below the link layer.", "title": "Comparison of TCP/IP and OSI layering" }, { "paragraph_id": 48, "text": "Several authors have attempted to incorporate the OSI model's layers 1 and 2 into the TCP/IP model since these are commonly referred to in modern standards (for example, by IEEE and ITU). 
This often results in a model with five layers, where the link layer or network access layer is split into the OSI model's layers 1 and 2.", "title": "Comparison of TCP/IP and OSI layering" }, { "paragraph_id": 49, "text": "The IETF protocol development effort is not concerned with strict layering. Some of its protocols may not fit cleanly into the OSI model, although RFCs sometimes refer to it and often use the old OSI layer numbers. The IETF has repeatedly stated that Internet Protocol and architecture development is not intended to be OSI-compliant. RFC 3439, referring to the internet architecture, contains a section entitled: \"Layering Considered Harmful\".", "title": "Comparison of TCP/IP and OSI layering" }, { "paragraph_id": 50, "text": "For example, the session and presentation layers of the OSI suite are considered to be included in the application layer of the TCP/IP suite. The functionality of the session layer can be found in protocols like HTTP and SMTP and is more evident in protocols like Telnet and the Session Initiation Protocol (SIP). Session-layer functionality is also realized with the port numbering of the TCP and UDP protocols, which are included in the transport layer of the TCP/IP suite. Functions of the presentation layer are realized in the TCP/IP applications with the MIME standard in data exchange.", "title": "Comparison of TCP/IP and OSI layering" }, { "paragraph_id": 51, "text": "Another difference is in the treatment of routing protocols. The OSI routing protocol IS-IS belongs to the network layer, and does not depend on CLNS for delivering packets from one router to another, but defines its own layer-3 encapsulation. In contrast, OSPF, RIP, BGP and other routing protocols defined by the IETF are transported over IP, and, for the purpose of sending and receiving routing protocol packets, routers act as hosts. As a consequence, RFC 1812 includes routing protocols in the application layer.
Some authors, such as Tanenbaum in Computer Networks, describe routing protocols in the same layer as IP, reasoning that routing protocols inform decisions made by the forwarding process of routers.", "title": "Comparison of TCP/IP and OSI layering" }, { "paragraph_id": 52, "text": "IETF protocols can be encapsulated recursively, as demonstrated by tunnelling protocols such as Generic Routing Encapsulation (GRE). GRE uses the same mechanism that OSI uses for tunnelling at the network layer.", "title": "Comparison of TCP/IP and OSI layering" }, { "paragraph_id": 53, "text": "The Internet protocol suite does not presume any specific hardware or software environment. It only requires that hardware and a software layer exist that are capable of sending and receiving packets on a computer network. As a result, the suite has been implemented on essentially every computing platform. A minimal implementation of TCP/IP includes the following: Internet Protocol (IP), Address Resolution Protocol (ARP), Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Internet Group Management Protocol (IGMP). In addition to IP, ICMP, TCP, and UDP, Internet Protocol version 6 requires Neighbor Discovery Protocol (NDP), ICMPv6, and Multicast Listener Discovery (MLD) and is often accompanied by an integrated IPSec security layer.", "title": "Implementations" } ]
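The transport-over-internet layering that the Implementations paragraph describes (UDP carried by IP, with the link layer supplied by the operating system) can be sketched with a loopback exchange. This is an illustrative Python snippet, not part of the article; the 127.0.0.1 address and the payload are arbitrary choices.

```python
import socket

# Sketch: the application hands a payload to UDP (transport layer);
# the kernel's IP and link layers handle addressing and framing.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP over IPv4
server.bind(("127.0.0.1", 0))          # port 0: kernel picks a free port
host, port = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", (host, port))  # application-layer payload

data, _addr = server.recvfrom(1024)
print(data.decode())                   # prints: hello
client.close()
server.close()
```

The port numbers visible here are the transport-layer demultiplexing mechanism that, as the text notes, supplies some of what OSI assigns to the session layer.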
The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the set of communication protocols used in the Internet and similar computer networks according to functional criteria. The foundational protocols in the suite are the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP). Early versions of this networking model were known as the Department of Defense (DoD) model because the research and development were funded by the United States Department of Defense through DARPA. The Internet protocol suite provides end-to-end data communication specifying how data should be packetized, addressed, transmitted, routed, and received. This functionality is organized into four abstraction layers, which classify all related protocols according to each protocol's scope of networking. An implementation of the layers for a particular application forms a protocol stack. From lowest to highest, the layers are the link layer, containing communication methods for data that remains within a single network segment (link); the internet layer, providing internetworking between independent networks; the transport layer, handling host-to-host communication; and the application layer, providing process-to-process data exchange for applications. The technical standards underlying the Internet protocol suite and its constituent protocols are maintained by the Internet Engineering Task Force (IETF). The Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems.
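The four-layer organization described above can be pictured as successive wrapping of the application's data. The following is a rough Python sketch, not part of the article; the header strings are invented placeholders, not real TCP/IP wire formats.

```python
def encapsulate(payload: bytes) -> bytes:
    """Illustrative only: each layer prepends its (fake) header."""
    segment = b"TCP|" + payload  # transport layer: host-to-host delivery
    packet = b"IP|" + segment    # internet layer: internetwork routing
    frame = b"ETH|" + packet     # link layer: single network segment
    return frame

print(encapsulate(b"GET /"))     # b'ETH|IP|TCP|GET /'
```

Decapsulation on the receiving host proceeds in the reverse order, each layer stripping its own header before passing the rest upward.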
2001-10-29T03:36:58Z
2023-12-21T13:32:13Z
[ "Template:Use American English", "Template:Reflist", "Template:Cite IETF", "Template:Cite magazine", "Template:Short description", "Template:About", "Template:IETF RFC", "Template:Cite journal", "Template:Cite ISO standard", "Template:Wikiversity", "Template:Further", "Template:N/a", "Template:Unreferenced section", "Template:Cite RFC", "Template:Cite book", "Template:Cite news", "Template:Cite thesis", "Template:Cite conference", "Template:Use mdy dates", "Template:IPstack", "Template:Citation needed", "Template:See also", "Template:Cn", "Template:Cite web", "Template:Citation" ]
https://en.wikipedia.org/wiki/Internet_protocol_suite
15,477
Ibn al-Shaykh al-Libi
Ibn al-Shaykh al-Libi (Arabic: إبْنُ ٱلشَّيْخِ اللّيبي; born Ali Mohamed Abdul Aziz al-Fakheri; 1963 – May 10, 2009) was a Libyan national captured in Afghanistan in November 2001 after the fall of the Taliban; he was interrogated by American and Egyptian forces. The information he gave under torture to Egyptian authorities was cited by the George W. Bush Administration in the months preceding its 2003 invasion of Iraq as evidence of a connection between Saddam Hussein and al-Qaeda. That information was frequently repeated by members of the Bush Administration, although reports from both the Central Intelligence Agency (CIA) and the Defense Intelligence Agency (DIA) strongly questioned its credibility, suggesting that al-Libi was "intentionally misleading" interrogators. In 2006, the United States transferred al-Libi to Libya, where he was imprisoned by the government. He was reported to have tuberculosis. On May 19, 2009, the government reported that he had recently committed suicide in prison. Human Rights Watch, whose representatives had recently visited him, called for an investigation into the circumstances of his death; The New York Times reported that Ayman al-Zawahiri had asserted that Libya had tortured al-Libi to death. In Afghanistan, al-Libi led the Al Khaldan training camp, where Zacarias Moussaoui and Ahmed Ressam trained for attacks in the United States. An associate of Abu Zubaydah, al-Libi had his assets frozen by the U.S. government following the September 11 attacks; on September 26, 2002, the U.S. government published a list of terrorists who were covered by this restriction. The Uyghur Turkistan Islamic Party's "Islamic Turkistan" magazine in its 5th edition published an obituary of its member Turghun (Ibn Umar al Turkistani) speaking of his time training at the Al Khaldan training camp and his meeting with Ibn al-Shaykh al-Libi. 
The Uyghurs in Afghanistan fought against the American bombing and the Northern Alliance after the September 11, 2001, attacks. Ibn Umar died fighting against Americans at the Qalai Jangi prison riot. Al-Libi was captured by Pakistani officials in November 2001, as he attempted to flee Afghanistan following the collapse of the Taliban after the 2001 U.S. invasion of Afghanistan, and was transferred to the US military in January 2002. Department of Defense spokesmen used to routinely describe the Khaldan training camp as an al-Qaeda training camp, and Al-Libi and Abu Zubaydah as senior members of al-Qaeda. But, during testimony at their Combatant Status Review Tribunals, several Guantanamo captives, including Zubaydah, described the Khaldan camp as having been run by a rival jihadist organization – one that did not support attacking civilians. Al-Libi was turned over to the FBI and held at Bagram Air Base. When talking to the FBI interrogators Russell Fincher and Marty Mahon, he seemed "genuinely friendly" and spoke chiefly in English, calling for a translator only when necessary. He seemed to bond with Fincher, a devout Christian, and the two prayed together and discussed religion at length. Al-Libi told the interrogators details about Richard Reid, a British citizen who had joined al-Qaeda and trained to carry out a suicide bombing of an airliner, which he unsuccessfully attempted on December 22, 2001. Al-Libi agreed to continue cooperating if the United States would allow his wife and her family to emigrate, while he was prosecuted within the American legal system. The CIA asked President Bush for permission to take al-Libi into their own custody and rendition him to a foreign country for more "tough guy" questioning, and were granted permission. They "simply came and took al-Libi away from the FBI." One CIA officer was heard telling their new prisoner that "You know where you are going. Before you get there, I am going to find your mother and fuck her". 
In the second week of January 2002, al-Libi was flown to the USS Bataan in the northern Arabian Sea, a ship being used to hold eight other notable prisoners, including John Walker Lindh. He was subsequently transferred to Egyptian interrogators. According to The Washington Post, Under questioning, al-Libi provided the CIA with intelligence about an alleged plot to blow up the U.S. Embassy in Yemen with a truck bomb and pointed officials in the direction of Abu Zubaydah, a top al Qaeda leader known to have been involved in the Sept. 11 plot. On September 15, 2002, Time published an article that detailed the CIA interrogations of Omar al-Faruq. It said, On Sept. 9, according to a secret CIA summary of the interview, al-Faruq confessed that he was, in fact, al-Qaeda's senior representative in Southeast Asia. Then came an even more shocking confession: according to the CIA document, al-Faruq said two senior al-Qaeda officials, Abu Zubaydah and Ibn al-Shaykh al-Libi, had ordered him to 'plan large-scale attacks against U.S. interests in Indonesia, Malaysia, the Philippines, Singapore, Thailand, Taiwan, Vietnam and Cambodia.' Al-Libi has been identified as a principal source of faulty prewar intelligence regarding chemical weapons training between Iraq and al-Qaeda that was used by the Bush Administration to justify the invasion of Iraq. Specifically, he told interrogators that Iraq provided training to al-Qaeda in the area of "chemical and biological weapons". In Cincinnati in October 2002, Bush informed the public: "Iraq has trained al Qaeda members in bomb making and poisons and gases." This claim was repeated several times in the run-up to the war, including in then-Secretary of State Colin Powell's speech to the U.N. Security Council on February 5, 2003, which concluded with a long recitation of the information provided by al-Libi. 
Powell's speech was made less than a month after a then-classified CIA report concluded that the information provided by al-Libi was unreliable, and about a year after a DIA report concluded the same thing. Al-Libi recanted these claims in January 2004 after U.S. interrogators presented "new evidence from other detainees that cast doubt on his claims", according to Newsweek. The DIA concluded in February 2002 that al-Libi deliberately misled interrogators, in what the CIA called an "attempt to exaggerate his importance". Some speculate that his reason for giving disinformation was in order to draw the U.S. into an attack on Iraq—Islam's "weakest" state; a remark attributed to al-Libi—which al-Qaeda believes will lead to a global jihad. Others, including al-Libi himself, have insisted that he gave false information due to the use of torture (so-called "enhanced interrogation techniques"). An article published in the November 5, 2005, The New York Times quoted two paragraphs of a Defense Intelligence Agency report, declassified upon request by Senator Carl Levin, that expressed doubts about the results of al-Libi's interrogation in February 2002. Al-Libi told a foreign intelligence service that: Iraq — acting on the request of al-Qa'ida militant Abu Abdullah, who was Muhammad Atif's emissary — agreed to provide unspecified chemical or biological weapons training for two al-Qa'ida associates beginning in December 2000. The two individuals departed for Iraq but did not return, so al-Libi was not in a position to know if any training had taken place. The September 2002 version of Iraqi Support for Terrorism stated that al-Libi said Iraq had "provided" chemical and biological weapons training for two al-Qaeda associates in 2000, but also stated that al-Libi "did not know the results of the training." 
The 2006 Senate Report on Pre-war Intelligence on Iraq stated that "Although DIA coordinated on CIA's Iraqi Support for Terrorism paper, DIA analysis preceding that assessment was more skeptical of the al-Libi reporting." In July 2002, DIA assessed It is plausible al-Qa'ida attempted to obtain CB assistance from Iraq and Ibn al-Shaykh is sufficiently senior to have access to such sensitive information. However, Ibn al-Shaykh's information lacks details concerning the individual Iraqis involved, the specific CB materials associated with the assistance and the location where the alleged training occurred. The information is also second hand, and not derived from Ibn al-Shaykh's personal experience. The Senate report also states "According to al-Libi, after his decision to fabricate information for debriefers, he 'lied about being a member of al-Qa'ida. Although he considered himself close to, but not a member of, al-Qa'ida, he knew enough about the senior members, organization and operations to claim to be a member.'" On September 8, 2006, the United States Senate Select Committee on Intelligence released "Phase II" of its report on prewar intelligence on Iraq. Conclusion 3 of the report states the following: Postwar findings support the DIA February 2002 assessment that Ibn al-Shaykh al-Libi was likely intentionally misleading his debriefers when he said that Iraq provided two al-Qa'ida associates with chemical and biological weapons (CBW) training in 2000 ... Postwar findings do not support the CIA's assessment that his reporting was credible ... No postwar information has been found that indicates CBW training occurred and the detainee who provided the key prewar reporting about this training recanted his claims after the war ... 
CIA's January 2003 version of Iraqi Support for Terrorism described al-Libi's reporting for CBW training "credible", but noted that the individuals who traveled to Iraq for CBW training had not returned, so al-Libi was not in position to know if the training had taken place ... In January 2004, al-Libi recanted his allegations about CBW training and many of his other claims about Iraq's links to al Qa'ida. He told debriefers that, to the best of his knowledge, al-Qa'ida never sent any individuals into Iraq for any kind of support in chemical or biological weapons. Al-libi told debriefers that he fabricated information while in U.S. custody to receive better treatment and in response to threats of being transferred to a foreign intelligence service which he believed would torture him ... He said that later, while he was being debriefed by a (REDACTED) foreign intelligence service, he fabricated more information in response to physical abuse and threats of torture. The foreign government service denies using any pressure during al-Libi's interrogation. In February 2004, the CIA reissued the debriefing reports from al-Libi to note that he had recanted information. A CIA officer explained that while CIA believes al-Libi fabricated information, the CIA cannot determine whether, or what portions of, the original statements or the later recants are true or false. On June 11, 2008, Newsweek published an account of material from a "previously undisclosed CIA report written in the summer of 2002". The article reported that on August 7, 2002, CIA analysts had drafted a high-level report that expressed serious doubts about the information flowing from al-Libi's interrogation. The information that al-Libi acknowledged being a member of al-Qaeda's executive council was not supported by other sources. 
According to al-Libi, in Egypt he was locked in a tiny box less than 20 inches high and held for 17 hours and after being let out he was thrown to the floor and punched for 15 minutes. According to CIA operational cables, only then did he tell his "fabricated" story about al-Qaeda members being dispatched to Iraq. In November 2006, a Moroccan using the pseudonym Omar Nasiri, having infiltrated al-Qaeda in the 1990s, wrote the book, Inside the Jihad: My Life with al Qaeda, a Spy's story. In the book, Nasiri claims that al-Libi deliberately planted information to encourage the U.S. to invade Iraq. In an interview with BBC2's Newsnight, Nasiri said Libi "needed the conflict in Iraq because months before I heard him telling us when a question was asked in the mosque after the prayer in the evening, where is the best country to fight the jihad?" Nasiri said that Libi had identified Iraq as the "weakest" Muslim country. He suggested to Newsnight that al-Libi wanted to overthrow Saddam and use Iraq as a jihadist base. Nasiri describes al-Libi as one of the leaders at the Afghan camp, and characterizes him as "brilliant in every way." He said that learning how to withstand interrogations and supply false information was a key part of the training in the camps. Al-Libi "knew what his interrogators wanted, and he was happy to give it to them. He wanted to see Saddam toppled even more than the Americans did." In April 2007, former Director of Central Intelligence George Tenet released his memoir titled At the Center of the Storm: My Years at the CIA. With regard to al-Libi, Tenet writes the following: We believed that al-Libi was withholding critical threat information at the time, so we transferred him to a third country for further debriefing. Allegations were made that we did so knowing that he would be tortured, but this is false. The country in question understood and agreed that they would hold al-Libi for a limited period. 
In the course of questioning while he was in U.S. custody in Afghanistan, al-Libi made initial references to possible al-Qa'ida training in Iraq. He offered up information that a militant known as Abu Abdullah had told him that at least three times between 1997 and 2000, the now-deceased al-Qa'ida leader Mohammad Atef had sent Abu Abdullah to Iraq to seek training in poisons and mustard gas. Another senior al-Qa'ida detainee told us that Mohammad Atef was interested in expanding al-Qa-ida's ties to Iraq, which, in our eyes, added credibility to the reporting. Then, shortly after the Iraq war got under way, al-Libi recanted his story. Now, suddenly, he was saying that there was no such cooperative training. Inside the CIA, there was sharp division on his recantation. It led us to recall his reporting, and here is where the mystery begins. Al-Libi's story will no doubt be that he decided to fabricate in order to get better treatment and avoid harsh punishment. He clearly lied. We just don't know when. Did he lie when he first said that al-Qa'ida members received training in Iraq or did he lie when he said they did not? In my mind, either case might still be true. Perhaps, early on, he was under pressure, assumed his interrogators already knew the story, and sang away. After time passed and it became clear that he would not be harmed, he might have changed his story to butt the minds of his captors. Al-Qa'ida operatives are trained to do just that. A recantation would restore his stature as someone who had successfully confounded the enemy. The fact is, we don't know which story is true, and since we don't know, we can assume nothing. In 2006, the Bush Administration announced that it was transferring high-value al-Qaeda detainees from CIA secret prisons so they could be put on trial by military commissions. However, the Administration was conspicuously silent about al-Libi. 
In December 2014, it was revealed that he had been transferred to the Guantanamo Bay detention camp in 2003 and transferred to Morocco on March 27, 2004. Noman Benotman, a former Mujahideen who knew Libi, told Newsweek that during a recent trip to Tripoli, he met with a senior Libyan government official who confirmed to him that Libi had been transferred to Libya and was being held in prison there. He was suffering from tuberculosis. On May 10, 2009, the English language edition of the Libyan newspaper Ennahar reported that the government said that Al-Libi had been repatriated to Libyan custody in 2006, and had recently committed suicide by hanging. It attributed the information to another newspaper, Oea. Ennahar reported Al-Libi's real name was Ali Mohamed Abdul Aziz Al-Fakheri. It stated he was 46 years old, and had been allowed visits with international human rights workers from Human Rights Watch. The story was widely reported by other media outlets. Al-Libi had been visited in April 2009 by a team from Human Rights Watch. His sudden death so soon after this visit has led human rights organisations and Islamic groups to question whether it was truly a suicide. Clive Stafford Smith, Legal Director of the UK branch of the human rights group Reprieve, said, "We are told that al-Libi committed suicide in his Libyan prison. If this is true it would be because of his torture and abuse. If false, it may reflect a desire to silence one of the greatest embarrassments to the Bush administration." Hafed Al-Ghwell, a Libya expert and director of communications at the Dubai campus of Harvard Kennedy School, commented: This is a regime with a long history of killing people in jail and then claiming it was suicide. My guess is Libya has seen the winds of change in America and wanted to bury this man before international organisations start demanding access to him. On June 19, 2009, Andy Worthington published new information on al-Libi's death. 
Worthington gave a detailed timeline of al-Libi's last years. The head of the Washington office of Human Rights Watch said al-Libi was "Exhibit A" in hearings on the relationship between pre-Iraq War false intelligence and torture. Confirmation of al-Libi's location came two weeks prior to his death. An independent investigation of his death has been requested by Human Rights Watch. On October 4, 2009, Reuters reported that Ayman al-Zawahiri, the head of al-Qaeda, had asserted that Libya had caused al-Libi's death through torture.
[ { "paragraph_id": 0, "text": "Ibn al-Shaykh al-Libi (Arabic: إبْنُ ٱلشَّيْخِ اللّيبي; born Ali Mohamed Abdul Aziz al-Fakheri; 1963 – May 10, 2009) was a Libyan national captured in Afghanistan in November 2001 after the fall of the Taliban; he was interrogated by American and Egyptian forces. The information he gave under torture to Egyptian authorities was cited by the George W. Bush Administration in the months preceding its 2003 invasion of Iraq as evidence of a connection between Saddam Hussein and al-Qaeda. That information was frequently repeated by members of the Bush Administration, although reports from both the Central Intelligence Agency (CIA) and the Defense Intelligence Agency (DIA) strongly questioned its credibility, suggesting that al-Libi was \"intentionally misleading\" interrogators.", "title": "" }, { "paragraph_id": 1, "text": "In 2006, the United States transferred al-Libi to Libya, where he was imprisoned by the government. He was reported to have tuberculosis. On May 19, 2009, the government reported that he had recently committed suicide in prison. Human Rights Watch, whose representatives had recently visited him, called for an investigation into the circumstances of his death; The New York Times reported that Ayman al-Zawahiri had asserted that Libya had tortured al-Libi to death.", "title": "" }, { "paragraph_id": 2, "text": "In Afghanistan, al-Libi led the Al Khaldan training camp, where Zacarias Moussaoui and Ahmed Ressam trained for attacks in the United States. An associate of Abu Zubaydah, al-Libi had his assets frozen by the U.S. government following the September 11 attacks; on September 26, 2002, the U.S. 
government published a list of terrorists who were covered by this restriction.", "title": "Training camp director" }, { "paragraph_id": 3, "text": "The Uyghur Turkistan Islamic Party's \"Islamic Turkistan\" magazine in its 5th edition published an obituary of its member Turghun (Ibn Umar al Turkistani) speaking of his time training at the Al Khaldan training camp and his meeting with Ibn al-Shaykh al-Libi. The Uyghurs in Afghanistan fought against the American bombing and the Northern Alliance after the September 11, 2001, attacks. Ibn Umar died fighting against Americans at the Qalai Jangi prison riot.", "title": "Training camp director" }, { "paragraph_id": 4, "text": "Al-Libi was captured by Pakistani officials in November 2001, as he attempted to flee Afghanistan following the collapse of the Taliban after the 2001 U.S. invasion of Afghanistan, and was transferred to the US military in January 2002.", "title": "Training camp director" }, { "paragraph_id": 5, "text": "Department of Defense spokesmen used to routinely describe the Khaldan training camp as an al-Qaeda training camp, and Al-Libi and Abu Zubaydah as senior members of al-Qaeda. But, during testimony at their Combatant Status Review Tribunals, several Guantanamo captives, including Zubaydah, described the Khaldan camp as having been run by a rival jihadist organization – one that did not support attacking civilians.", "title": "Training camp director" }, { "paragraph_id": 6, "text": "Al-Libi was turned over to the FBI and held at Bagram Air Base. When talking to the FBI interrogators Russell Fincher and Marty Mahon, he seemed \"genuinely friendly\" and spoke chiefly in English, calling for a translator only when necessary. 
He seemed to bond with Fincher, a devout Christian, and the two prayed together and discussed religion at length.", "title": "Cooperation with the FBI" }, { "paragraph_id": 7, "text": "Al-Libi told the interrogators details about Richard Reid, a British citizen who had joined al-Qaeda and trained to carry out a suicide bombing of an airliner, which he unsuccessfully attempted on December 22, 2001. Al-Libi agreed to continue cooperating if the United States would allow his wife and her family to emigrate, while he was prosecuted within the American legal system.", "title": "Cooperation with the FBI" }, { "paragraph_id": 8, "text": "The CIA asked President Bush for permission to take al-Libi into their own custody and rendition him to a foreign country for more \"tough guy\" questioning, and were granted permission. They \"simply came and took al-Libi away from the FBI.\" One CIA officer was heard telling their new prisoner that \"You know where you are going. Before you get there, I am going to find your mother and fuck her\".", "title": "In CIA custody" }, { "paragraph_id": 9, "text": "In the second week of January 2002, al-Libi was flown to the USS Bataan in the northern Arabian Sea, a ship being used to hold eight other notable prisoners, including John Walker Lindh. He was subsequently transferred to Egyptian interrogators.", "title": "In CIA custody" }, { "paragraph_id": 10, "text": "According to The Washington Post,", "title": "Information provided" }, { "paragraph_id": 11, "text": "Under questioning, al-Libi provided the CIA with intelligence about an alleged plot to blow up the U.S. Embassy in Yemen with a truck bomb and pointed officials in the direction of Abu Zubaydah, a top al Qaeda leader known to have been involved in the Sept. 11 plot.", "title": "Information provided" }, { "paragraph_id": 12, "text": "On September 15, 2002, Time published an article that detailed the CIA interrogations of Omar al-Faruq. 
It said,", "title": "Information provided" }, { "paragraph_id": 13, "text": "On Sept. 9, according to a secret CIA summary of the interview, al-Faruq confessed that he was, in fact, al-Qaeda's senior representative in Southeast Asia. Then came an even more shocking confession: according to the CIA document, al-Faruq said two senior al-Qaeda officials, Abu Zubaydah and Ibn al-Shaykh al-Libi, had ordered him to 'plan large-scale attacks against U.S. interests in Indonesia, Malaysia, the Philippines, Singapore, Thailand, Taiwan, Vietnam and Cambodia.'", "title": "Information provided" }, { "paragraph_id": 14, "text": "Al-Libi has been identified as a principal source of faulty prewar intelligence regarding chemical weapons training between Iraq and al-Qaeda that was used by the Bush Administration to justify the invasion of Iraq. Specifically, he told interrogators that Iraq provided training to al-Qaeda in the area of \"chemical and biological weapons\". In Cincinnati in October 2002, Bush informed the public: \"Iraq has trained al Qaeda members in bomb making and poisons and gases.\"", "title": "Information provided" }, { "paragraph_id": 15, "text": "This claim was repeated several times in the run-up to the war, including in then-Secretary of State Colin Powell's speech to the U.N. Security Council on February 5, 2003, which concluded with a long recitation of the information provided by al-Libi. Powell's speech was made less than a month after a then-classified CIA report concluded that the information provided by al-Libi was unreliable, and about a year after a DIA report concluded the same thing.", "title": "Information provided" }, { "paragraph_id": 16, "text": "Al-Libi recanted these claims in January 2004 after U.S. interrogators presented \"new evidence from other detainees that cast doubt on his claims\", according to Newsweek. 
The DIA concluded in February 2002 that al-Libi deliberately misled interrogators, in what the CIA called an \"attempt to exaggerate his importance\". Some speculate that his reason for giving disinformation was in order to draw the U.S. into an attack on Iraq—Islam's \"weakest\" state; a remark attributed to al-Libi—which al-Qaeda believes will lead to a global jihad. Others, including al-Libi himself, have insisted that he gave false information due to the use of torture (so-called \"enhanced interrogation techniques\").", "title": "Information provided" }, { "paragraph_id": 17, "text": "An article published in the November 5, 2005, The New York Times quoted two paragraphs of a Defense Intelligence Agency report, declassified upon request by Senator Carl Levin, that expressed doubts about the results of al-Libi's interrogation in February 2002.", "title": "Information provided" }, { "paragraph_id": 18, "text": "Al-Libi told a foreign intelligence service that:", "title": "Information provided" }, { "paragraph_id": 19, "text": "Iraq — acting on the request of al-Qa'ida militant Abu Abdullah, who was Muhammad Atif's emissary — agreed to provide unspecified chemical or biological weapons training for two al-Qa'ida associates beginning in December 2000. 
The two individuals departed for Iraq but did not return, so al-Libi was not in a position to know if any training had taken place.", "title": "Information provided" }, { "paragraph_id": 20, "text": "The September 2002 version of Iraqi Support for Terrorism stated that al-Libi said Iraq had \"provided\" chemical and biological weapons training for two al-Qaeda associates in 2000, but also stated that al-Libi \"did not know the results of the training.\"", "title": "Information provided" }, { "paragraph_id": 21, "text": "The 2006 Senate Report on Pre-war Intelligence on Iraq stated that \"Although DIA coordinated on CIA's Iraqi Support for Terrorism paper, DIA analysis preceding that assessment was more skeptical of the al-Libi reporting.\" In July 2002, DIA assessed", "title": "Information provided" }, { "paragraph_id": 22, "text": "It is plausible al-Qa'ida attempted to obtain CB assistance from Iraq and Ibn al-Shaykh is sufficiently senior to have access to such sensitive information. However, Ibn al-Shaykh's information lacks details concerning the individual Iraqis involved, the specific CB materials associated with the assistance and the location where the alleged training occurred. The information is also second hand, and not derived from Ibn al-Shaykh's personal experience.", "title": "Information provided" }, { "paragraph_id": 23, "text": "The Senate report also states \"According to al-Libi, after his decision to fabricate information for debriefers, he 'lied about being a member of al-Qa'ida. Although he considered himself close to, but not a member of, al-Qa'ida, he knew enough about the senior members, organization and operations to claim to be a member.'\"", "title": "Information provided" }, { "paragraph_id": 24, "text": "On September 8, 2006, the United States Senate Select Committee on Intelligence released \"Phase II\" of its report on prewar intelligence on Iraq. 
Conclusion 3 of the report states the following:", "title": "Senate Reports on Pre-war Intelligence on Iraq" }, { "paragraph_id": 25, "text": "Postwar findings support the DIA February 2002 assessment that Ibn al-Shaykh al-Libi was likely intentionally misleading his debriefers when he said that Iraq provided two al-Qa'ida associates with chemical and biological weapons (CBW) training in 2000 ... Postwar findings do not support the CIA's assessment that his reporting was credible ... No postwar information has been found that indicates CBW training occurred and the detainee who provided the key prewar reporting about this training recanted his claims after the war ... CIA's January 2003 version of Iraqi Support for Terrorism described al-Libi's reporting for CBW training \"credible\", but noted that the individuals who traveled to Iraq for CBW training had not returned, so al-Libi was not in position to know if the training had taken place ... In January 2004, al-Libi recanted his allegations about CBW training and many of his other claims about Iraq's links to al Qa'ida. He told debriefers that, to the best of his knowledge, al-Qa'ida never sent any individuals into Iraq for any kind of support in chemical or biological weapons. Al-libi told debriefers that he fabricated information while in U.S. custody to receive better treatment and in response to threats of being transferred to a foreign intelligence service which he believed would torture him ... He said that later, while he was being debriefed by a (REDACTED) foreign intelligence service, he fabricated more information in response to physical abuse and threats of torture. The foreign government service denies using any pressure during al-Libi's interrogation. In February 2004, the CIA reissued the debriefing reports from al-Libi to note that he had recanted information. 
A CIA officer explained that while CIA believes al-Libi fabricated information, the CIA cannot determine whether, or what portions of, the original statements or the later recants are true or false.", "title": "Senate Reports on Pre-war Intelligence on Iraq" }, { "paragraph_id": 26, "text": "On June 11, 2008, Newsweek published an account of material from a \"previously undisclosed CIA report written in the summer of 2002\". The article reported that on August 7, 2002, CIA analysts had drafted a high-level report that expressed serious doubts about the information flowing from al-Libi's interrogation. The information that al-Libi acknowledged being a member of al-Qaeda's executive council was not supported by other sources. According to al-Libi, in Egypt he was locked in a tiny box less than 20 inches high and held for 17 hours, and after being let out he was thrown to the floor and punched for 15 minutes. According to CIA operational cables, only then did he tell his \"fabricated\" story about al-Qaeda members being dispatched to Iraq.", "title": "Senate Reports on Pre-war Intelligence on Iraq" }, { "paragraph_id": 27, "text": "In November 2006, a Moroccan using the pseudonym Omar Nasiri, who had infiltrated al-Qaeda in the 1990s, wrote the book Inside the Jihad: My Life with al Qaeda, A Spy's Story. In the book, Nasiri claims that al-Libi deliberately planted information to encourage the U.S. to invade Iraq. In an interview with BBC2's Newsnight, Nasiri said Libi \"needed the conflict in Iraq because months before I heard him telling us when a question was asked in the mosque after the prayer in the evening, where is the best country to fight the jihad?\" Nasiri said that Libi had identified Iraq as the \"weakest\" Muslim country. He suggested to Newsnight that al-Libi wanted to overthrow Saddam and use Iraq as a jihadist base. 
Nasiri describes al-Libi as one of the leaders at the Afghan camp, and characterizes him as \"brilliant in every way.\" He said that learning how to withstand interrogations and supply false information was a key part of the training in the camps. Al-Libi \"knew what his interrogators wanted, and he was happy to give it to them. He wanted to see Saddam toppled even more than the Americans did.\"", "title": "Book: Inside the Jihad" }, { "paragraph_id": 28, "text": "In April 2007, former Director of Central Intelligence George Tenet released his memoir titled At the Center of the Storm: My Years at the CIA. With regard to al-Libi, Tenet writes the following:", "title": "Book: At the Center of the Storm" }, { "paragraph_id": 29, "text": "We believed that al-Libi was withholding critical threat information at the time, so we transferred him to a third country for further debriefing. Allegations were made that we did so knowing that he would be tortured, but this is false. The country in question understood and agreed that they would hold al-Libi for a limited period. In the course of questioning while he was in U.S. custody in Afghanistan, al-Libi made initial references to possible al-Qa'ida training in Iraq. He offered up information that a militant known as Abu Abdullah had told him that at least three times between 1997 and 2000, the now-deceased al-Qa'ida leader Mohammad Atef had sent Abu Abdullah to Iraq to seek training in poisons and mustard gas. Another senior al-Qa'ida detainee told us that Mohammad Atef was interested in expanding al-Qa-ida's ties to Iraq, which, in our eyes, added credibility to the reporting. Then, shortly after the Iraq war got under way, al-Libi recanted his story. Now, suddenly, he was saying that there was no such cooperative training. Inside the CIA, there was sharp division on his recantation. It led us to recall his reporting, and here is where the mystery begins. 
Al-Libi's story will no doubt be that he decided to fabricate in order to get better treatment and avoid harsh punishment. He clearly lied. We just don't know when. Did he lie when he first said that al-Qa'ida members received training in Iraq or did he lie when he said they did not? In my mind, either case might still be true. Perhaps, early on, he was under pressure, assumed his interrogators already knew the story, and sang away. After time passed and it became clear that he would not be harmed, he might have changed his story to butt the minds of his captors. Al-Qa'ida operatives are trained to do just that. A recantation would restore his stature as someone who had successfully confounded the enemy. The fact is, we don't know which story is true, and since we don't know, we can assume nothing.", "title": "Book: At the Center of the Storm" }, { "paragraph_id": 30, "text": "In 2006, the Bush Administration announced that it was transferring high-value al-Qaeda detainees from CIA secret prisons so they could be put on trial by military commissions. However, the Administration was conspicuously silent about al-Libi. In December 2014, it was revealed that he had been transferred to the Guantanamo Bay detention camp in 2003 and transferred to Morocco on March 27, 2004.", "title": "Repatriation to Libya and death" }, { "paragraph_id": 31, "text": "Noman Benotman, a former Mujahideen who knew Libi, told Newsweek that during a recent trip to Tripoli, he met with a senior Libyan government official who confirmed to him that Libi had been transferred to Libya and was being held in prison there. He was suffering from tuberculosis.", "title": "Repatriation to Libya and death" }, { "paragraph_id": 32, "text": "On May 10, 2009, the English language edition of the Libyan newspaper Ennahar reported that the government said that Al-Libi had been repatriated to Libyan custody in 2006, and had recently committed suicide by hanging. 
It attributed the information to another newspaper, Oea. Ennahar reported Al-Libi's real name was Ali Mohamed Abdul Aziz Al-Fakheri. It stated he was 46 years old, and had been allowed visits with international human rights workers from Human Rights Watch. The story was widely reported by other media outlets.", "title": "Repatriation to Libya and death" }, { "paragraph_id": 33, "text": "Al-Libi had been visited in April 2009 by a team from Human Rights Watch. His sudden death so soon after this visit has led human rights organisations and Islamic groups to question whether it was truly a suicide. Clive Stafford Smith, Legal Director of the UK branch of the human rights group Reprieve, said, \"We are told that al-Libi committed suicide in his Libyan prison. If this is true it would be because of his torture and abuse. If false, it may reflect a desire to silence one of the greatest embarrassments to the Bush administration.\" Hafed Al-Ghwell, a Libya expert and director of communications at the Dubai campus of Harvard Kennedy School, commented:", "title": "Repatriation to Libya and death" }, { "paragraph_id": 34, "text": "This is a regime with a long history of killing people in jail and then claiming it was suicide. My guess is Libya has seen the winds of change in America and wanted to bury this man before international organisations start demanding access to him.", "title": "Repatriation to Libya and death" }, { "paragraph_id": 35, "text": "On June 19, 2009, Andy Worthington published new information on al-Libi's death. Worthington gave a detailed timeline of Al Libi's last years.", "title": "Repatriation to Libya and death" }, { "paragraph_id": 36, "text": "The head of the Washington office of Human Rights Watch said al-Libi was \"Exhibit A\" in hearings on the relationship between pre-Iraq War false intelligence and torture. Confirmation of al-Libi's location came two weeks prior to his death. 
An independent investigation of his death has been requested by Human Rights Watch.", "title": "Repatriation to Libya and death" }, { "paragraph_id": 37, "text": "On October 4, 2009, Reuters reported that Ayman Al Zawahiri, the head of al-Qaeda, had asserted that Libya had caused al-Libi's death through torture.", "title": "Repatriation to Libya and death" } ]
Ibn al-Shaykh al-Libi was a Libyan national captured in Afghanistan in November 2001 after the fall of the Taliban; he was interrogated by American and Egyptian forces. The information he gave under torture to Egyptian authorities was cited by the George W. Bush Administration in the months preceding its 2003 invasion of Iraq as evidence of a connection between Saddam Hussein and al-Qaeda. That information was frequently repeated by members of the Bush Administration, although reports from both the Central Intelligence Agency (CIA) and the Defense Intelligence Agency (DIA) strongly questioned its credibility, suggesting that al-Libi was "intentionally misleading" interrogators. In 2006, the United States transferred al-Libi to Libya, where he was imprisoned by the government. He was reported to have tuberculosis. On May 19, 2009, the government reported that he had recently committed suicide in prison. Human Rights Watch, whose representatives had recently visited him, called for an investigation into the circumstances of his death; The New York Times reported that Ayman al-Zawahiri had asserted that Libya had tortured al-Libi to death.
2002-01-05T18:58:34Z
2023-10-07T00:33:51Z
[ "Template:Short description", "Template:Quote", "Template:Dead link", "Template:HighValue", "Template:Use mdy dates", "Template:Lang-ar", "Template:Cite magazine", "Template:Infobox War on Terror detainee", "Template:CIAPrisons", "Template:Al-Qaeda", "Template:Cite web", "Template:Spaced ndash", "Template:Blockquote", "Template:Cite quote", "Template:Reflist", "Template:Cite book", "Template:Cite news", "Template:Cite press release", "Template:Controversies surrounding people captured during the War on Terror" ]
https://en.wikipedia.org/wiki/Ibn_al-Shaykh_al-Libi
15,478
IDF
IDF or idf may refer to:
[ { "paragraph_id": 0, "text": "IDF or idf may refer to:", "title": "" } ]
IDF or idf may refer to:
2002-02-15T12:42:46Z
2023-12-18T05:32:18Z
[ "Template:Disambiguation", "Template:Wiktionary", "Template:TOC right", "Template:Srt" ]
https://en.wikipedia.org/wiki/IDF
15,487
International Red Cross and Red Crescent Movement
The organized International Red Cross and Red Crescent Movement is a humanitarian movement with approximately 16 million volunteers, members, and staff worldwide. It was founded to protect human life and health, to ensure respect for all human beings, and to prevent and alleviate human suffering. Within it there are three distinct organisations that are legally independent from each other, but are united within the movement through common basic principles, objectives, symbols, statutes, and governing organisations. Until the middle of the nineteenth century, there were no organized or well-established army nursing systems for casualties, nor safe or protected institutions to accommodate and treat those who were wounded on the battlefield. A devout Calvinist, the Swiss businessman Jean-Henri Dunant traveled to Italy to meet then-French emperor Napoleon III in June 1859 with the intention of discussing difficulties in conducting business in Algeria, which at that time was occupied by France. He arrived in the small town of Solferino on the evening of 24 June after the Battle of Solferino, an engagement in the Austro-Sardinian War. In a single day, about 40,000 soldiers on both sides died or were left wounded on the field. Dunant was shocked by the terrible aftermath of the battle, the suffering of the wounded soldiers, and the near-total lack of medical attendance and basic care. He completely abandoned the original intent of his trip and for several days devoted himself to helping with the treatment and care of the wounded. He took the lead in organizing relief assistance with the local villagers, aiding the wounded without discrimination. Back at his home in Geneva, he decided to write a book entitled A Memory of Solferino, which he published using his own money in 1862. He sent copies of the book to leading political and military figures throughout Europe, and to people he thought could help him make a change. 
His book included vivid descriptions of his experiences in Solferino in 1859, and he explicitly advocated the formation of national voluntary relief organizations to help nurse wounded soldiers in the case of war, inspired by Christian teaching regarding social responsibility and his experience after the battlefield of Solferino. He called for the development of an international treaty to guarantee the protection of medics and field hospitals for soldiers wounded on the battlefield. In 1863, Gustave Moynier, a Geneva lawyer and president of the Geneva Society for Public Welfare, received a copy of Dunant's book and introduced it for discussion at a meeting of that society. As a result of this initial discussion, the society established an investigatory commission to examine the feasibility of Dunant's suggestions and eventually to organize an international conference about their possible implementation. The members of this committee, which has subsequently been referred to as the "Committee of the Five", aside from Dunant and Moynier were physician Louis Appia, who had significant experience working as a field surgeon; Appia's friend and colleague Théodore Maunoir, from the Geneva Hygiene and Health Commission; and Guillaume-Henri Dufour, a Swiss army general of great renown. Eight days later, the five men decided to rename the committee to the "International Committee for Relief to the Wounded". From 26 to 29 October 1863, the international conference organized by the committee was held in Geneva to develop possible measures to improve medical services on the battlefield. The conference was attended by 36 individuals: eighteen official delegates from national governments, six delegates from non-governmental organizations, seven non-official foreign delegates, and the five members of the International Committee. 
The states and kingdoms represented by official delegates were: Austrian Empire, Grand Duchy of Baden, Kingdom of Bavaria, French Empire, Kingdom of Hanover, Grand Duchy of Hesse, Kingdom of Italy, Kingdom of the Netherlands, Kingdom of Prussia, Russian Empire, Kingdom of Saxony, Kingdom of Spain, United Kingdoms of Sweden and Norway, and United Kingdom of Great Britain and Ireland. Among the proposals written in the final resolutions of the conference, adopted on 29 October 1863, were: Only a year later, the Swiss government invited the governments of all European countries, as well as the United States, the Empire of Brazil and the Mexican Empire to attend an official diplomatic conference. Sixteen countries sent a total of 26 delegates to Geneva. On 22 August 1864, the conference adopted the first Geneva Convention "for the Amelioration of the Condition of the Wounded in Armies in the Field". Representatives of 12 states and kingdoms signed the convention: The convention contained ten articles, establishing for the first time legally binding rules guaranteeing neutrality and protection for wounded soldiers, field medical personnel, and specific humanitarian institutions in an armed conflict. Directly following the establishment of the Geneva Convention, the first national societies were founded in Belgium, Denmark, France, Oldenburg, Prussia, Spain, and Württemberg. Also in 1864, Louis Appia and Charles van de Velde, a captain of the Dutch Army, became the first independent and neutral delegates to work under the symbol of the Red Cross in an armed conflict. The Ottoman government ratified this treaty on 5 July 1865. The Turkish Red Crescent organization was founded in the Ottoman Empire in 1868, partly in response to the experience of the Crimean War (1853–1856), in which disease overshadowed battle as the main cause of death and suffering among Turkish soldiers. 
It was the first Red Crescent society of its kind and one of the most important charity organizations in the Muslim world. In 1867, the first International Conference of National Aid Societies for the Nursing of the War Wounded was convened. Also in 1867, Jean-Henri Dunant was forced to declare bankruptcy due to business failures in Algeria, partly because he had neglected his business interests during his tireless activities for the International Committee. The controversy surrounding Dunant's business dealings and the resulting negative public opinion, combined with an ongoing conflict with Gustave Moynier, led to Dunant's expulsion from his position as a member and secretary. He was charged with fraudulent bankruptcy and a warrant for his arrest was issued. Thus, he was forced to leave Geneva and never returned to his home city. In the following years, national societies were founded in nearly every country in Europe. The project resonated well with patriotic sentiments that were on the rise in the late-nineteenth-century, and national societies were often encouraged as signifiers of national moral superiority. In 1876, the committee adopted the name "International Committee of the Red Cross" (ICRC), which is still its official designation today. Five years later, the American Red Cross was founded through the efforts of Clara Barton. More and more countries signed the Geneva Convention and began to respect it in practice during armed conflicts. In a rather short period of time, the Red Cross gained huge momentum as an internationally respected movement, and the national societies became increasingly popular as a venue for volunteer work. When the first Nobel Peace Prize was awarded in 1901, the Norwegian Nobel Committee opted to give it jointly to Jean-Henri Dunant and Frédéric Passy, a leading international pacifist. 
More significant than the honor of the prize itself, this prize marked the overdue rehabilitation of Jean-Henri Dunant and represented a tribute to his key role in the formation of the Red Cross. Dunant died nine years later in the small Swiss health resort of Heiden. Only two months earlier his long-standing adversary Gustave Moynier had also died, leaving a mark in the history of the committee as its longest-serving president ever. In 1906, the 1864 Geneva Convention was revised for the first time. One year later, the Hague Convention X, adopted at the Second International Peace Conference in The Hague, extended the scope of the Geneva Convention to naval warfare. Shortly before the beginning of the First World War in 1914, 50 years after the foundation of the ICRC and the adoption of the first Geneva Convention, there were already 45 national relief societies throughout the world. The movement had extended itself beyond Europe and North America to Central and South America (Argentine Republic, the United States of Brazil, the Republic of Chile, the Republic of Cuba, the United Mexican States, the Republic of Peru, the Republic of El Salvador, the Oriental Republic of Uruguay, the United States of Venezuela), Asia (the Republic of China, the Empire of Japan and the Kingdom of Siam), and Africa (Union of South Africa). With the outbreak of World War I, the ICRC found itself confronted with enormous challenges that it could handle only by working closely with the national Red Cross societies. Red Cross nurses from around the world, including the United States and Japan, came to support the medical services of the armed forces of the European countries involved in the war. On 15 August 1914, immediately after the start of the war, the ICRC set up its International Prisoners-of-War Agency (IPWA) to trace POWs and to re-establish communications with their respective families. 
The Austrian writer and pacifist Stefan Zweig described the situation at the Geneva headquarters of the ICRC: Hardly had the first blows been struck when cries of anguish from all lands began to be heard in Switzerland. Thousands who were without news of fathers, husbands, and sons in the battlefields, stretched despairing arms into the void. By hundreds, by thousands, by tens of thousands, letters and telegrams poured into the little House of the Red Cross in Geneva, the only international rallying point that still remained. Isolated, like stormy petrels, came the first inquiries for missing relatives; then these inquiries themselves became a storm. The letters arrived in sackfuls. Nothing had been prepared for dealing with such an inundation of misery. The Red Cross had no space, no organization, no system, and above all no helpers. However, by the end of the year, the Agency already had some 1,200 volunteers who worked in the Musée Rath of Geneva, amongst them the French writer and pacifist Romain Rolland. When he was awarded the Nobel Prize for Literature for 1915, he donated half of the prize money to the Agency. Most of the staff were women, some of whom – like Marguerite van Berchem, Marguerite Cramer and Suzanne Ferrière – served in high positions as pioneers of gender equality in an organisation dominated by men. By the end of the war, the Agency had transferred about 20 million letters and messages, 1.9 million parcels, and about 18 million Swiss francs in monetary donations to POWs of all affected countries. Furthermore, due to the intervention of the Agency, about 200,000 prisoners were exchanged between the warring parties, released from captivity and returned to their home country. The organizational card index of the Agency accumulated about 7 million records from 1914 to 1923. The card index led to the identification of about 2 million POWs and the ability to contact their families. 
The complete index is on loan today from the ICRC to the International Red Cross and Red Crescent Museum in Geneva. The right to access the index is still strictly restricted to the ICRC. During the entire war, the ICRC monitored warring parties' compliance with the Geneva Conventions of the 1907 revision and forwarded complaints about violations to the respective country. When chemical weapons were used in this war for the first time in history, the ICRC mounted a vigorous protest against their use. Even without having a mandate from the Geneva Conventions, the ICRC tried to ameliorate the suffering of civil populations. In territories that were officially designated as "occupied territories", the ICRC could assist the civilian population on the basis of the Hague Convention's "Laws and Customs of War on Land" of 1907. This convention was also the legal basis for the ICRC's work for prisoners of war. In addition to the work of the International Prisoner-of-War Agency as described above, this included inspection visits to POW camps. A total of 524 camps throughout Europe were visited by 41 delegates from the ICRC through the end of the war. Between 1916 and 1918, the ICRC published a number of postcards with scenes from the POW camps. The pictures showed the prisoners in day-to-day activities such as the distribution of letters from home. The intention of the ICRC was to provide the families of the prisoners with some hope and solace and to alleviate their uncertainties about the fate of their loved ones. After the end of the war, between 1920 and 1922, the ICRC organized the return of about 500,000 prisoners to their home countries. In 1920, the task of repatriation was handed over to the newly founded League of Nations, which appointed the Norwegian diplomat and scientist Fridtjof Nansen as its "High Commissioner for Repatriation of the War Prisoners". 
His legal mandate was later extended to support and care for war refugees and displaced persons when his office became that of the League of Nations "High Commissioner for Refugees". Nansen, who invented the Nansen passport for stateless refugees and was awarded the Nobel Peace Prize in 1922, appointed two delegates from the ICRC as his deputies. A year before the end of the war, the ICRC received the 1917 Nobel Peace Prize for its outstanding wartime work. It was the only Nobel Peace Prize awarded in the period from 1914 to 1918. In 1923, the International Committee of the Red Cross adopted a change in its policy regarding the selection of new members. Until then, only citizens from the city of Geneva could serve in the committee. Eligibility was expanded to include all Swiss citizens. As a direct consequence of World War I, a treaty was adopted in 1925 which outlawed the use of suffocating or poisonous gases and biological agents as weapons. Four years later, the original Convention was revised and the second Geneva Convention "relative to the Amelioration of the Condition of Wounded, Sick and Shipwrecked Members of Armed Forces at Sea" was established. The events of World War I and the respective activities of the ICRC significantly increased the reputation and authority of the Committee among the international community and led to an extension of its competencies. As early as 1934, a draft proposal for an additional convention for the protection of the civil population in occupied territories during an armed conflict was adopted by the International Red Cross Conference. However, most governments had little interest in implementing this convention, and it was thus prevented from entering into force before the beginning of World War II. The Red Cross' response to the Holocaust has been the subject of significant controversy and criticism. 
As early as May 1944, the ICRC was criticized for its indifference to Jewish suffering and death—criticism that intensified after the end of the war, when the full extent of the Holocaust became undeniable. One defense to these allegations is that the Red Cross was trying to preserve its reputation as a neutral and impartial organization by not interfering with what was viewed as a German internal matter. The Red Cross also considered its primary focus to be prisoners of war whose countries had signed the Geneva Convention. The Geneva Conventions in their 1929 revision formed the legal basis of the work of the ICRC during World War II. The activities of the committee were similar to those during World War I: visiting and monitoring POW camps, organizing relief assistance for civilian populations, and administering the exchange of messages regarding prisoners and missing persons. By the end of the war, 179 delegates had conducted 12,750 visits to POW camps in 41 countries. The Central Information Agency on Prisoners-of-War (Agence centrale des prisonniers de guerre) had a staff of 3,000, the card index tracking prisoners contained 45 million cards, and 120 million messages were exchanged by the Agency. One major obstacle was that the Nazi-controlled German Red Cross refused to cooperate with the Geneva statutes, including blatant violations such as the deportation of Jews from Germany, and the mass murders conducted in the Nazi concentration camps. Two other main parties to the conflict, the Soviet Union and Japan, were not party to the 1929 Geneva Conventions and were not legally required to follow the rules of the conventions. During the war, the ICRC was unable to obtain an agreement with Nazi Germany about the treatment of detainees in concentration camps, and the ICRC eventually abandoned applying pressure, saying later it did so in order to avoid disrupting its work with POWs. 
The ICRC was also unable to obtain a response to reliable information about the extermination camps and the mass killing of European Jews, Roma, et al. After November 1943, the ICRC obtained permission to send parcels to concentration camp detainees with known names and locations. Because the notices of receipt for these parcels were often signed by other inmates, the ICRC managed to register the identities of about 105,000 detainees in the concentration camps and delivered about 1.1 million parcels, primarily to the concentration camps Dachau, Buchenwald, Ravensbrück, and Sachsenhausen. Maurice Rossel was sent to Berlin as a delegate of the International Red Cross; he visited Theresienstadt Ghetto in 1944. The choice of the inexperienced Rossel for this mission has been interpreted as indicative of his organization's indifference to the "Jewish problem", while his report has been described as "emblematic of the failure of the ICRC" to advocate for Jews during the Holocaust. Rossel's report was noted for its uncritical acceptance of Nazi propaganda. He erroneously stated that Jews were not deported from Theresienstadt. Claude Lanzmann recorded his experiences in 1979, producing a documentary entitled A Visitor from the Living. On 12 March 1945, ICRC president Carl Jacob Burckhardt received a message from SS General Ernst Kaltenbrunner allowing ICRC delegates to visit the concentration camps. This agreement was bound by the condition that these delegates would have to stay in the camps until the end of the war. Ten delegates, among them Louis Haefliger (Mauthausen-Gusen), Paul Dunant (Theresienstadt), and Victor Maurer (Dachau), accepted the assignment and visited the camps. Louis Haefliger prevented the forceful eviction or blasting of Mauthausen-Gusen by alerting American troops. Friedrich Born (1903–1963), an ICRC delegate in Budapest, saved the lives of about 11,000 to 15,000 Jewish people in Hungary. 
Marcel Junod (1904–1961), a physician from Geneva, was one of the first foreigners to visit Hiroshima after the atomic bomb was dropped. In 1944, the ICRC received its second Nobel Peace Prize. As in World War I, it received the only Peace Prize awarded during the main period of war, 1939 to 1945. At the end of the war, the ICRC worked with national Red Cross societies to organize relief assistance to those countries most severely affected. In 1948, the Committee published a report reviewing its war-era activities from 1 September 1939 to 30 June 1947. The ICRC opened its archives from World War II in 1996. On 12 August 1949, further revisions to the existing two Geneva Conventions were adopted. An additional convention "for the Amelioration of the Condition of Wounded, Sick and Shipwrecked Members of Armed Forces at Sea", now called the second Geneva Convention, was brought under the Geneva Convention umbrella as a successor to the 1907 Hague Convention X. The 1929 Geneva convention "relative to the Treatment of Prisoners of War" may have been the second Geneva Convention from a historical point of view (because it was actually formulated in Geneva), but after 1949 it came to be called the third Convention because it came later chronologically than the Hague Convention. Reacting to the experience of World War II, the Fourth Geneva Convention, a new Convention "relative to the Protection of Civilian Persons in Time of War", was established. Also, the additional protocols of 8 June 1977 were intended to make the conventions apply to internal conflicts such as civil wars. Today, the four conventions and their added protocols contain more than 600 articles, while there were only 10 articles in the first 1864 convention. In celebration of its centennial in 1963, the ICRC, together with the League of Red Cross Societies, received its third Nobel Peace Prize. 
On 16 October 1990, the UN General Assembly granted the ICRC observer status for its assembly sessions and sub-committee meetings, the first observer status given to a private organization. The resolution was jointly proposed by 138 member states and introduced by the Italian ambassador, in memory of the organization's origins in the Battle of Solferino. An agreement with the Swiss government signed on 19 March 1993 affirmed the already long-standing policy of full independence of the committee from any interference by Switzerland. The agreement protects the full sanctity of all ICRC property in Switzerland including its headquarters and archive, grants members and staff legal immunity, exempts the ICRC from all taxes and fees, guarantees the protected and duty-free transfer of goods, services, and money, provides the ICRC with secure communication privileges at the same level as foreign embassies, and simplifies Committee travel in and out of Switzerland. At the end of the Cold War, the ICRC's work became more dangerous. In the 1990s, more delegates died than at any point in its history, especially when working in local and internal armed conflicts. These incidents often demonstrated a lack of respect for the rules of the Geneva Conventions and their protection symbols. Among the slain delegates were: On 27 January 2002, Palestinian Red Crescent volunteer paramedic and suicide bomber Wafa Idris was transported to Jerusalem, Israel, by a Red Crescent ambulance, whose driver was part of the plot, and killed herself while committing the Jaffa Street bombing. Idris, wearing a Red Crescent uniform, detonated a 22-pound (10 kilogram) bomb made up of TNT packed into pipes, in the center of Jerusalem outside a shoe store on the busy main shopping street Jaffa Road. The explosion she caused killed her and Pinhas Tokatli (81), and injured more than 100 others. 
In the 2000s, the ICRC has been active in the Afghanistan conflict areas and has set up six physical rehabilitation centers to help land mine victims. Their support extends to the national and international armed forces, civilians and the armed opposition. They regularly visit detainees under the custody of the Afghan government and the international armed forces, but have also occasionally had access since 2009 to people detained by the Taliban. They have provided basic first aid training and aid kits to both the Afghan security forces and Taliban members because, according to an ICRC spokesperson, "ICRC's constitution stipulates that all parties harmed by warfare will be treated as fairly as possible". In August 2021, when NATO-led forces retreated from Afghanistan, the ICRC decided to remain in the country to continue its mission to assist and protect victims of conflict. Since June 2021, ICRC-supported facilities have treated more than 40,000 people wounded during armed confrontations there. Among the ten largest ICRC deployments worldwide has been the mission in Ukraine, where the organisation has been active since 2014, working closely with the Ukrainian Red Cross Society. At first, the ICRC was active primarily in the disputed Donbas region, assisting persons injured by armed confrontations there. When Russia invaded Ukraine on 24 February 2022, the fighting moved to more populated areas in eastern, northern, and southern Ukraine. The head of the ICRC delegation in Kyiv warned on 26 February 2022 that neighborhoods of major cities were becoming the frontline, with significant consequences for their populations, including children, the sick, and the elderly. The ICRC urgently called on all parties to the conflict not to forget their obligations under international humanitarian law to ensure the protection of the civilian population and infrastructure, and to respect the dignity of refugees and prisoners of war. 
Prior to the war that broke out in 2023, Israeli authorities required Palestinian ambulances to undergo thorough searches when passing through checkpoints, saying the policy was driven by Palestinian organizations using ambulances to transport terrorists and armaments. The Israeli Ministry of Health said that: "The Red Crescent closely cooperated with the MDA (Magen David Adom) until April 2002. At that time, the IDF claimed that Red Crescent ambulances were being used to carry terrorists. The Red Crescent personnel involved in this violation were interrogated." In October 2023, the ICRC responded to the 2023 Israel–Hamas war, which resulted in the deaths of thousands of civilians on both sides. The ICRC called the violence "abhorrent" and implored both sides to reduce the suffering of civilians. The ICRC was in constant contact with Hamas and Israeli officials to avoid further carnage. The conflict began on 7 October 2023 with the killing, rape, and kidnapping of over 1,400 Israelis and others by Hamas in the 2023 Hamas-led attack on Israel. Fabrizio Carboni, regional director of the ICRC and IFRC for the Near and Middle East, stressed that the taking of hostages is prohibited under international humanitarian law. The ICRC, as a neutral intermediary, stood ready to conduct humanitarian visits and to facilitate communications between family members and hostages, with the goal of their eventual release. At the same time, he spoke about the impact of the war on residents of Gaza, who were cut off from all food shipments, electricity and medical supplies, which particularly affected the functioning of local hospitals there. Inhabitants of Gaza had already endured a problematic lack of drinking water before the onset of the hostilities. 
Nitsana Darshan-Leitner, attorney, human rights activist, and the founder of Shurat HaDin, wrote in a letter to the ICRC on November 20, 2023, that by allowing Hamas to use its ambulances to transport terrorists, the Red Crescent "plays an integral part in Hamas's illegal conduct," and suggested that the ICRC suspend the Red Crescent from the organization. She said that Hamas had used Red Crescent ambulances for two decades, transporting rockets and Hamas members and sometimes disguising them as wounded. The monitoring group UN Watch wrote in a report on November 11, 2023, that the ICRC had adopted an overwhelmingly skewed approach to the Hamas-Israel war in its social media. It said that of 187 tweets published by the main ICRC accounts on Twitter, 77% focused on criticizing Israel, expressly or by implication, while only 7% criticized Hamas. As an example, it pointed out that the ICRC promoted a false Hamas story that Israel had attacked and "destroyed" Al-Ahli Arab Hospital, saying it was "shocked and horrified" that "hundreds were killed", including "patients killed in a hospital bed" and doctors "losing their lives trying to save others" in the Israeli attack, which UN Watch said was completely untrue and which the ICRC never corrected. It also said that the ICRC repeatedly promoted the notion that Israel was responsible for a war that had in fact been launched by Hamas with its invasion and massacre of October 7, failed to condemn Hamas's aggression, and glossed over the 11,000 rockets launched by Hamas in two months and the displacement of 300,000 Israelis. Eli Beer, the founder of the emergency medical services organization United Hatzalah, wrote on December 12, 2023, that the ICRC had "neither the bandwidth nor the fortitude to gain access to captives taken during the massacre, or even to condemn Hamas for the endless list of war crimes it committed on October 7th." 
He said that it was "refusing to take decisive action to protect Israelis ... [and that it did] not provide [the hostages] with the critical medical treatment they are entitled to as prisoners of war." He also said that it "has a clear anti-Israel bent and blatantly discriminates against Jews... [and that it] did nothing for [the hostages'] release, other than picking up released hostages and driving them from point A to point B, acting as nothing more than an Uber-like transportation service for hostages." The ICRC, working closely with Red Crescent partners, has a neutral, independent and exclusively humanitarian mandate during such escalations of violence in the Middle East, and urged all parties to protect the lives of civilians, to reduce their suffering and to protect their dignity. During the violent conflict, the ICRC and the Palestinian Red Crescent Society (PRCS) provided hospitals in the Gaza Strip with support through large humanitarian convoys from Egypt, and were seriously affected by numerous aerial attacks on medical facilities and ambulances. The ICRC said in November that civilians had "overwhelmingly borne the brunt" of the fighting in the Palestinian enclave and Israel so far. Israeli forces had killed over 15,200 people, including Hamas members, in a devastating bombing campaign and ground offensive. In late November, an ICRC team started a multi-day operation to facilitate the release and transfer of hostages held in Gaza and of Palestinian prisoners to the West Bank. In early December, US Secretary of State Antony Blinken insisted that the Red Cross delegation must have access to the remaining hostages. The ICRC is not a negotiating power, but the ICRC chief held direct talks with senior Hamas leader Ismail Haniyeh in Qatar in November, demanding direct access to the remaining hostages. 
In 1919, representatives from the national Red Cross societies of Britain, France, Italy, Japan, and the US came together in Paris to found the "League of Red Cross Societies", the forerunner of today's IFRC. The original idea came from Henry Davison, then president of the American Red Cross. This move, led by the American Red Cross, expanded the international activities of the Red Cross movement beyond the strict mission of the ICRC to include relief assistance in response to emergency situations not caused by war (such as man-made or natural disasters). The American Red Cross already had extensive disaster relief experience extending back to its foundation. The formation of the League, as an additional international Red Cross organization alongside the ICRC, was not without controversy, for a number of reasons. The ICRC had, to some extent, valid concerns about a possible rivalry between the two organizations. The foundation of the League was seen as an attempt to undermine the leadership position of the ICRC within the movement and to gradually transfer most of its tasks and competencies to a multilateral institution. In addition, all founding members of the League were national societies from countries of the Entente or from associated partners of the Entente. The original statutes of the League from May 1919 contained further regulations which gave the five founding societies a privileged status and, due to the efforts of Henry Davison, the right to permanently exclude the national Red Cross societies from the countries of the Central Powers, namely Germany, Austria, Hungary, Bulgaria and Turkey, and in addition the national Red Cross society of Russia. These rules were contrary to the Red Cross principles of universality and equality among all national societies, a situation which furthered the concerns of the ICRC. 
The first relief assistance mission organized by the League was an aid mission for the victims of a famine and subsequent typhus epidemic in Poland. Only five years after its foundation, the League had already issued 47 donation appeals for missions in 34 countries, an impressive indication of the need for this type of Red Cross work. The total sum raised by these appeals reached 685 million Swiss francs, which were used to bring emergency supplies to the victims of famines in Russia, Germany, and Albania; earthquakes in Chile, Persia, Japan, Colombia, Ecuador, Costa Rica, and Turkey; and refugee flows in Greece and Turkey. The first large-scale disaster mission of the League came after the 1923 earthquake in Japan which killed about 200,000 people and left countless more wounded and without shelter. Due to the League's coordination, the Red Cross society of Japan received goods from its sister societies reaching a total worth of about $100 million. Another important new field initiated by the League was the creation of youth Red Cross organizations within the national societies. A joint mission of the ICRC and the League in the Russian Civil War from 1917 to 1922 marked the first time the movement was involved in an internal conflict, although still without an explicit mandate from the Geneva Conventions. The League, with support from more than 25 national societies, organized assistance missions and the distribution of food and other aid goods for civil populations affected by hunger and disease. The ICRC worked with the Russian Red Cross Society and later the society of the Soviet Union, constantly emphasizing the ICRC's neutrality. In 1928, the "International Council" was founded to coordinate cooperation between the ICRC and the League, a task which was later taken over by the "Standing Commission". In the same year, a common statute for the movement was adopted for the first time, defining the respective roles of the ICRC and the League within the movement. 
During the Abyssinian war between Ethiopia and Italy from 1935 to 1936, the League contributed aid supplies worth about 1.7 million Swiss francs. Because the Italian fascist regime under Benito Mussolini refused any cooperation with the Red Cross, these goods were delivered solely to Ethiopia. During the war, an estimated 29 people died while under the explicit protection of the Red Cross symbol, most of them due to attacks by the Italian Army. During the civil war in Spain from 1936 to 1939, the League once again joined forces with the ICRC, with the support of 41 national societies. In 1939, on the brink of the Second World War, the League relocated its headquarters from Paris to Geneva to take advantage of Swiss neutrality. In 1952, the 1928 common statute of the movement was revised for the first time. Also, the period of decolonization from 1960 to 1970 was marked by a huge jump in the number of recognized national Red Cross and Red Crescent societies. By the end of the 1960s, there were more than 100 societies around the world. On 10 December 1963, the Federation and the ICRC received the Nobel Peace Prize. In 1983, the League was renamed the "League of Red Cross and Red Crescent Societies" to reflect the growing number of national societies operating under the Red Crescent symbol. Three years later, the seven basic principles of the movement as adopted in 1965 were incorporated into its statutes. The name of the League was changed again in 1991 to its current official designation, the "International Federation of Red Cross and Red Crescent Societies". In 1997, the ICRC and the IFRC signed the Seville Agreement, which further defined the responsibilities of both organizations within the movement. In 2004, the IFRC began its largest mission to date after the tsunami disaster in South Asia. 
More than 40 national societies have worked with more than 22,000 volunteers to bring relief to the countless victims left without food and shelter and endangered by the risk of epidemics. Altogether, about 80 million people worldwide serve with the ICRC, the International Federation, and the National Societies, the majority with the latter. At the 20th International Conference, held at the Neue Hofburg in Vienna from 2–9 October 1965, delegates "proclaimed" seven fundamental principles which are shared by all components of the Movement; they were added to the official statutes of the Movement in 1986. Their durability and universal acceptance are a result of the process through which they came into being. Rather than an effort to arrive at agreement, it was an attempt to discover what successful operations and organisational units, over the past 100 years, had in common. As a result, the Fundamental Principles of the Red Cross and Red Crescent were not revealed, but found, through a deliberate and participative process of discovery. That makes it even more important to note that the definition that appears for each principle is an integral part of the Principle in question and not an interpretation that can vary with time and place. The International Conference of the Red Cross and Red Crescent, which occurs once every four years, is the highest institutional body of the Movement. It gathers delegations from all of the national societies as well as from the ICRC, the IFRC and the signatory states to the Geneva Conventions. In between the conferences, the Standing Commission of the Red Cross and Red Crescent acts as the supreme body and supervises implementation of and compliance with the resolutions of the conference. In addition, the Standing Commission coordinates the cooperation between the ICRC and the IFRC. 
It consists of two representatives from the ICRC (including its president), two from the IFRC (including its president), and five individuals who are elected by the International Conference. The Standing Commission convenes every six months on average. Moreover, a convention of the Council of Delegates of the Movement takes place every two years in the course of the conferences of the General Assembly of the International Federation. The Council of Delegates plans and coordinates joint activities for the Movement. The official mission of the ICRC as an impartial, neutral, and independent organization is to stand for the protection of the life and dignity of victims of international and internal armed conflicts. According to the revised Seville Agreement of 2022, the ICRC is entrusted with the role of "co-convener", alongside the national Red Cross or Red Crescent society, in situations of international and non-international armed conflicts, internal strife and their direct results. The core tasks of the committee are derived from the Geneva Conventions and its own statutes. The ICRC is headquartered in the Swiss city of Geneva and has offices in over 100 countries. It has more than 22,000 staff members worldwide: about 1,400 working in its Geneva headquarters, 3,250 expatriate staff serving as general delegates and technical specialists, and about 17,000 locally recruited staff. According to Swiss law, the ICRC is defined as a private association. Contrary to popular belief, the ICRC is not a non-governmental organization in the most common sense of the term, nor is it an international organization. As it limits its members (a process called cooptation) to Swiss nationals only, it does not have a policy of open and unrestricted membership for individuals like other legally defined NGOs. The word "international" in its name does not refer to its membership but to the worldwide scope of its activities as defined by the Geneva Conventions. 
The ICRC has special privileges and legal immunities in many countries, based on national law in these countries or on agreements between the committee and the respective national governments. According to its statutes, it consists of 15 to 25 Swiss-citizen members, whom it coopts for a period of four years. There is no limit to the number of terms an individual member can serve, although a three-quarters majority of all members is required for re-election after the third term. The leading organs of the ICRC are the Directorate and the Assembly. The Directorate is the executive body of the committee. It consists of a general director and five directors in the areas of "Operations", "Human Resources", "Resources and Operational Support", "Communication", and "International Law and Cooperation within the Movement". The members of the Directorate are appointed by the Assembly to serve for four years. The Assembly, consisting of all of the members of the committee, convenes on a regular basis and is responsible for defining aims, guidelines, and strategies and for supervising the financial matters of the committee. The president of the Assembly is also the president of the committee as a whole. Furthermore, the Assembly elects a five-member Assembly Council which has the authority to decide on behalf of the full Assembly in some matters. The council is also responsible for organizing the Assembly meetings and for facilitating communication between the Assembly and the Directorate. Due to Geneva's location in the French-speaking part of Switzerland, the ICRC usually acts under its French name, Comité international de la Croix-Rouge (CICR). The official symbol of the ICRC is the Red Cross on a white background with the words "COMITE INTERNATIONAL GENEVE" circling the cross. The 2023 budget of the ICRC amounts to 2.5 billion Swiss francs. 
Most of that money comes from states, including Switzerland in its capacity as the depositary state of the Geneva Conventions, from national Red Cross societies, the signatory states of the Geneva Conventions, and from international organizations like the European Union. All payments to the ICRC are voluntary and are received as donations based on two types of appeals issued by the committee: an annual Headquarters Appeal to cover its internal costs and Emergency Appeals for its individual missions. In 2023, Ukraine is the ICRC's biggest humanitarian operation (at 316.5 million Swiss francs), followed by Afghanistan (218 million francs) and Syria (171.7 million francs). The International Federation of Red Cross and Red Crescent Societies coordinates cooperation between national Red Cross and Red Crescent societies throughout the world and supports the foundation of new national societies in countries where no official society exists. On the international stage, the IFRC organizes and leads relief assistance missions after emergencies such as natural disasters, manmade disasters, epidemics, mass refugee flights, and other emergencies. As per the 1997 Seville Agreement, the IFRC is the Lead Agency of the Movement in any emergency situation which does not take place as part of an armed conflict. The IFRC cooperates with the national societies of those countries affected – each called the Operating National Society (ONS) – as well as the national societies of other countries willing to offer assistance – called Participating National Societies (PNS). Among the 187 national societies admitted to the General Assembly of the International Federation as full members or observers, about 25–30 regularly work as PNS in other countries. The most active of those are the American Red Cross, the British Red Cross, the German Red Cross, and the Red Cross societies of Sweden and Norway. 
Another major mission of the IFRC which has gained attention in recent years is its commitment to work towards a codified, worldwide ban on the use of land mines and to bring medical, psychological, and social support to people injured by land mines. The IFRC has its headquarters in Geneva. It also runs five zone offices (Africa, Americas, Asia Pacific, Europe, Middle East-North Africa) and 14 permanent regional offices, and has about 350 delegates in more than 60 delegations around the world. The legal basis for the work of the IFRC is its constitution. The executive body of the IFRC is a secretariat, led by a secretary general. The secretariat is supported by five divisions, including "Programme Services", "Humanitarian values and humanitarian diplomacy", "National Society and Knowledge Development" and "Governance and Management Services". The highest decision-making body of the IFRC is its General Assembly, which convenes every two years with delegates from all of the national societies. Among other tasks, the General Assembly elects the secretary general. Between the convening of General Assemblies, the Governing Board is the leading body of the IFRC. It has the authority to make decisions for the IFRC in a number of areas. The Governing Board consists of the president and the vice presidents of the IFRC, the chairpersons of the Finance and Youth Commissions, and twenty elected representatives from national societies. The symbol of the IFRC is the combination of the Red Cross (left) and Red Crescent (right) on a white background surrounded by a red rectangular frame. The main parts of the budget of the IFRC are funded by contributions from the national societies which are members of the IFRC and through revenues from its investments. The exact amount of contributions from each member society is established by the Finance Commission and approved by the General Assembly. 
Any additional funding, especially for unforeseen expenses for relief assistance missions, is raised by "appeals" published by the IFRC and comes from voluntary donations by national societies, governments, other organizations, corporations, and individuals. National Red Cross and Red Crescent societies exist in nearly every country in the world. Within their home country, they take on the duties and responsibilities of a national relief society as defined by International Humanitarian Law. Within the Movement, the ICRC is responsible for legally recognizing a relief society as an official national Red Cross or Red Crescent society. The exact rules for recognition are defined in the statutes of the Movement; Article 4 of these statutes contains the "Conditions for recognition of National Societies". Once a National Society has been recognized by the ICRC as a component of the International Red Cross and Red Crescent Movement (the Movement), it is in principle admitted to the International Federation of Red Cross and Red Crescent Societies in accordance with the terms defined in the Constitution and Rules of Procedure of the International Federation. Today, there are 192 National Societies recognized within the Movement and which are members of the International Federation. The most recent National Societies to have been recognized within the Movement are the Maldives Red Crescent Society (9 November 2011), the Cyprus Red Cross Society, the South Sudan Red Cross Society (12 November 2013) and, most recently, the Tuvalu Red Cross Society (1 March 2016). Despite formal independence regarding its organizational structure and work, each national society is still bound by the laws of its home country. In many countries, national Red Cross and Red Crescent societies enjoy exceptional privileges due to agreements with their governments or specific "Red Cross Laws" granting full independence as required by the International Movement. 
The duties and responsibilities of a national society as defined by International Humanitarian Law and the statutes of the Movement include humanitarian aid in armed conflicts and emergency crises such as natural disasters, through activities such as Restoring Family Links. Depending on their respective human, technical, financial, and organizational resources, many national societies take on additional humanitarian tasks within their home countries, such as blood donation services or acting as civilian Emergency Medical Service (EMS) providers. The ICRC and the International Federation cooperate with the national societies in their international missions, especially with human, material, and financial resources and by organizing on-site logistics. The Russian Red Cross supports the organisation Myvmeste, which supports the Russian Army in its 2022 invasion of Ukraine through its "Everything for victory" fund. During the Hamas-Israel war, the IFRC in particular called for humanitarian access across Gaza and the West Bank, the release of hostages, the protection of civilians, hospitals and humanitarian workers from indiscriminate attack, and compliance with international humanitarian law to ensure its continued activities in the occupied Palestinian territories. The Red Cross, Red Crescent and Red Crystal emblems are officially recognized by the movement. De jure, the Red Lion and Sun emblem is also an official emblem, though it has fallen into disuse. Various other countries have lobbied for alternative symbols, which have been rejected because of concerns of territorialism. The Red Cross emblem was officially approved in Geneva in 1863. The Red Cross flag is not to be confused with the Saint George's Cross depicted on the flags of England, Barcelona, Georgia, Freiburg im Breisgau, and several other places. 
In order to avoid this confusion, the protected symbol is sometimes referred to as the "Greek Red Cross"; that term is also used in United States law to describe the Red Cross. The red cross of the Saint George's Cross extends to the edge of the flag, whereas the red cross on the Red Cross flag does not. The Red Cross flag is the colour-switched version of the Flag of Switzerland, in recognition of "the pioneering work of Swiss citizens in establishing internationally recognized standards for the protection of wounded combatants and military medical facilities". In 1906, to put an end to the argument of the Ottoman Empire that the flag took its roots from Christianity, it was decided officially to promote the idea that the Red Cross flag had been formed by reversing the federal colours of Switzerland, although no written evidence of this origin had ever been found. The 1899 convention signed at The Hague extended the use of the Red Cross flag to naval ensigns, requiring that "all hospital ships shall make themselves known by hoisting, together with their national flag, the white flag with a red cross provided by the Geneva Convention". The Red Crescent emblem was first used by ICRC volunteers during the armed conflict of 1876–1878 between the Ottoman Empire and the Russian Empire. The symbol was officially adopted in 1929, and so far 33 states in the Muslim world have recognized it. In common with the official promotion of the red cross symbol as a colour-reversal of the Swiss flag (rather than a religious symbol), the red crescent is similarly presented as being derived from a colour-reversal of the flag of the Ottoman Empire. 
The International Committee of the Red Cross (ICRC) was concerned with the possibility that the two previous symbols (Red Cross and Red Crescent) were conveying religious meanings which would not be compatible with, for example, a majority Hindu or Buddhist country from the Asia-Pacific region, where the majority did not associate with these symbols. Therefore, in 1992, the then-president Cornelio Sommaruga decided that a third, more neutral symbol was required. On 8 December 2005, in response to growing pressure to accommodate Magen David Adom (MDA), Israel's national emergency medical, disaster, ambulance, and blood bank service, as a full member of the Red Cross and Red Crescent movement, a new emblem (officially the Third Protocol Emblem, but more commonly known as the Red Crystal) was adopted by an amendment of the Geneva Conventions known as Protocol III, fulfilling Sommaruga's suggestion. The Crystal can be found on official buildings and occasionally in the field. This symbolises equality and has no political, religious, or geographical connotations, thus allowing any country not comfortable with the symbolism of the original two flags to join the movement. The Red Lion and Sun Society of Iran was established in 1922 and admitted to the Red Cross and Red Crescent movement in 1923. The symbol was introduced at Geneva in 1864, as a counter example to the crescent and cross used by two of Iran's rivals, the Ottoman and the Russian empires. Although that claim is inconsistent with the Red Crescent's history, that history also suggests that the Red Lion and Sun, like the Red Crescent, may have been conceived during the 1877–1878 war between Russia and Turkey. Due to the emblem's association with the Iranian monarchy, the Islamic Republic of Iran replaced the Red Lion and Sun with the Red Crescent in 1980, consistent with two existing Red Cross and Red Crescent symbols. 
Although the Red Lion and Sun has now fallen into disuse, Iran has in the past reserved the right to take it up again at any time; the Geneva Conventions continue to recognize it as an official emblem, and that status was confirmed by Protocol III in 2005 even as it added the Red Crystal. For over 50 years, Israel requested the addition of a red Star of David, arguing that since Christian and Muslim emblems were recognized, the corresponding Jewish emblem should be as well. This emblem has been used by Magen David Adom (MDA), or Red Star of David, but it is not recognized by the Geneva Conventions as a protected symbol. The Red Star of David is not recognized as a protected symbol outside Israel; instead the MDA uses the Red Crystal emblem during international operations in order to ensure protection. Depending on the circumstances, it may place the Red Star of David inside the Red Crystal, or use the Red Crystal alone. In her March 2000 letter to the International Herald Tribune and the New York Times, Bernadine Healy, then president of the American Red Cross, wrote: "The international committee's feared proliferation of symbols is a pitiful fig leaf, used for decades as the reason for excluding the Magen David Adom—the Shield (or Star) of David." In protest, the American Red Cross withheld millions of dollars in administrative funding from the International Federation of Red Cross and Red Crescent Societies beginning in May 2000. In 1922, a Red Swastika Society was formed in China during the Warlord era. The swastika is used in the Indian subcontinent, East, and Southeast Asia as a symbol representing Dharma, or Hinduism, Buddhism, and Jainism in general. While the organization has organized philanthropic relief projects (both domestic and international), as a sectarian religious body it is ineligible for recognition from the International Committee. 
The Australian TV network ABC and the indigenous rights group Rettet die Naturvölker released a documentary called Blood on the Cross in 1999. It alleged the involvement of the Red Cross with the British and Indonesian military in a massacre in the Southern Highlands of Western New Guinea during the World Wildlife Fund's Mapenduma hostage crisis of May 1996, when Western and Indonesian activists were held hostage by separatists. Following the broadcast of the documentary, the Red Cross announced publicly that it would appoint an individual outside the organization to investigate the allegations made in the film and any responsibility on its part. Piotr Obuchowicz was appointed to investigate the matter. The report categorically states that the Red Cross personnel accused of involvement were proven not to have been present; that a white helicopter was probably used in a military operation, but that it was not a Red Cross helicopter and must have been painted white by one of several military organizations operating in the region at the time, perhaps bearing the Red Cross logo as well, although no hard evidence of this was found; that this deception was part of the military operation to free the hostages and was clearly intended to achieve surprise by leading the local people to believe that a Red Cross helicopter was landing; and that the Red Cross should have responded more quickly and thoroughly in investigating the allegations.
[ { "paragraph_id": 0, "text": "The organized International Red Cross and Red Crescent Movement is a humanitarian movement with approximately 16 million volunteers, members, and staff worldwide. It was founded to protect human life and health, to ensure respect for all human beings, and to prevent and alleviate human suffering. Within it there are three distinct organisations that are legally independent from each other, but are united within the movement through common basic principles, objectives, symbols, statutes, and governing organisations.", "title": "" }, { "paragraph_id": 1, "text": "Until the middle of the nineteenth century, there were no organized or well-established army nursing systems for casualties, nor safe or protected institutions to accommodate and treat those who were wounded on the battlefield. A devout Calvinist, the Swiss businessman Jean-Henri Dunant traveled to Italy to meet then-French emperor Napoleon III in June 1859 with the intention of discussing difficulties in conducting business in Algeria, which at that time was occupied by France. He arrived in the small town of Solferino on the evening of 24 June after the Battle of Solferino, an engagement in the Austro-Sardinian War. In a single day, about 40,000 soldiers on both sides died or were left wounded on the field. Dunant was shocked by the terrible aftermath of the battle, the suffering of the wounded soldiers, and the near-total lack of medical attendance and basic care. He completely abandoned the original intent of his trip and for several days he devoted himself to the treatment and care of the wounded. He took the lead in organizing an extensive relief effort with the local villagers, aiding the wounded without discrimination.", "title": "History" }, { "paragraph_id": 2, "text": "Back at his home in Geneva, he decided to write a book entitled A Memory of Solferino, which he published using his own money in 1862.
He sent copies of the book to leading political and military figures throughout Europe, and to people he thought could help him make a change. His book included vivid descriptions of his experiences in Solferino in 1859, and he explicitly advocated the formation of national voluntary relief organizations to help nurse wounded soldiers in the case of war, inspired by Christian teaching regarding social responsibility and by his experience on the battlefield of Solferino. He called for the development of an international treaty to guarantee the protection of medics and field hospitals for soldiers wounded on the battlefield.", "title": "History" }, { "paragraph_id": 3, "text": "In 1863, Gustave Moynier, a Geneva lawyer and president of the Geneva Society for Public Welfare, received a copy of Dunant's book and introduced it for discussion at a meeting of that society. As a result of this initial discussion, the society established an investigatory commission to examine the feasibility of Dunant's suggestions and eventually to organize an international conference about their possible implementation. The members of this committee, which has subsequently been referred to as the \"Committee of the Five\", aside from Dunant and Moynier were physician Louis Appia, who had significant experience working as a field surgeon; Appia's friend and colleague Théodore Maunoir, from the Geneva Hygiene and Health Commission; and Guillaume-Henri Dufour, a Swiss army general of great renown. Eight days later, the five men decided to rename the committee the \"International Committee for Relief to the Wounded\".", "title": "History" }, { "paragraph_id": 4, "text": "From 26 to 29 October 1863, the international conference organized by the committee was held in Geneva to develop possible measures to improve medical services on the battlefield.
The conference was attended by 36 individuals: eighteen official delegates from national governments, six delegates from non-governmental organizations, seven non-official foreign delegates, and the five members of the International Committee. The states and kingdoms represented by official delegates were: Austrian Empire, Grand Duchy of Baden, Kingdom of Bavaria, French Empire, Kingdom of Hanover, Grand Duchy of Hesse, Kingdom of Italy, Kingdom of the Netherlands, Kingdom of Prussia, Russian Empire, Kingdom of Saxony, Kingdom of Spain, United Kingdoms of Sweden and Norway, and United Kingdom of Great Britain and Ireland.", "title": "History" }, { "paragraph_id": 5, "text": "Among the proposals written in the final resolutions of the conference, adopted on 29 October 1863, were:", "title": "History" }, { "paragraph_id": 6, "text": "Only a year later, the Swiss government invited the governments of all European countries, as well as the United States, the Empire of Brazil and the Mexican Empire to attend an official diplomatic conference. Sixteen countries sent a total of 26 delegates to Geneva. On 22 August 1864, the conference adopted the first Geneva Convention \"for the Amelioration of the Condition of the Wounded in Armies in the Field\". Representatives of 12 states and kingdoms signed the convention:", "title": "History" }, { "paragraph_id": 7, "text": "The convention contained ten articles, establishing for the first time legally binding rules guaranteeing neutrality and protection for wounded soldiers, field medical personnel, and specific humanitarian institutions in an armed conflict.", "title": "History" }, { "paragraph_id": 8, "text": "Directly following the establishment of the Geneva Convention, the first national societies were founded in Belgium, Denmark, France, Oldenburg, Prussia, Spain, and Württemberg. 
Also in 1864, Louis Appia and Charles van de Velde, a captain of the Dutch Army, became the first independent and neutral delegates to work under the symbol of the Red Cross in an armed conflict.", "title": "History" }, { "paragraph_id": 9, "text": "The Ottoman government ratified this treaty on 5 July 1865. The Turkish Red Crescent organization was founded in the Ottoman Empire in 1868, partly in response to the experience of the Crimean War (1853–1856), in which disease overshadowed battle as the main cause of death and suffering among Turkish soldiers. It was the first Red Crescent society of its kind and one of the most important charity organizations in the Muslim world.", "title": "History" }, { "paragraph_id": 10, "text": "In 1867, the first International Conference of National Aid Societies for the Nursing of the War Wounded was convened. Also in 1867, Jean-Henri Dunant was forced to declare bankruptcy due to business failures in Algeria, partly because he had neglected his business interests during his tireless activities for the International Committee. The controversy surrounding Dunant's business dealings and the resulting negative public opinion, combined with an ongoing conflict with Gustave Moynier, led to Dunant's expulsion from his position as a member and secretary. He was charged with fraudulent bankruptcy and a warrant for his arrest was issued. Thus, he was forced to leave Geneva and never returned to his home city.", "title": "History" }, { "paragraph_id": 11, "text": "In the following years, national societies were founded in nearly every country in Europe. The project resonated well with patriotic sentiments that were on the rise in the late-nineteenth-century, and national societies were often encouraged as signifiers of national moral superiority. In 1876, the committee adopted the name \"International Committee of the Red Cross\" (ICRC), which is still its official designation today. 
Five years later, the American Red Cross was founded through the efforts of Clara Barton. More and more countries signed the Geneva Convention and began to respect it in practice during armed conflicts. In a rather short period of time, the Red Cross gained huge momentum as an internationally respected movement, and the national societies became increasingly popular as a venue for volunteer work.", "title": "History" }, { "paragraph_id": 12, "text": "When the first Nobel Peace Prize was awarded in 1901, the Norwegian Nobel Committee opted to give it jointly to Jean-Henri Dunant and Frédéric Passy, a leading international pacifist. More significant than the honor of the prize itself, this award marked the overdue rehabilitation of Jean-Henri Dunant and represented a tribute to his key role in the formation of the Red Cross. Dunant died nine years later in the small Swiss health resort of Heiden. Only two months earlier, his long-standing adversary Gustave Moynier had also died, entering the committee's history as its longest-serving president.", "title": "History" }, { "paragraph_id": 13, "text": "In 1906, the 1864 Geneva Convention was revised for the first time. One year later, the Hague Convention X, adopted at the Second International Peace Conference in The Hague, extended the scope of the Geneva Convention to naval warfare. Shortly before the beginning of the First World War in 1914, 50 years after the foundation of the ICRC and the adoption of the first Geneva Convention, there were already 45 national relief societies throughout the world.
The movement had extended itself beyond Europe and North America to Central and South America (Argentine Republic, the United States of Brazil, the Republic of Chile, the Republic of Cuba, the United Mexican States, the Republic of Peru, the Republic of El Salvador, the Oriental Republic of Uruguay, the United States of Venezuela), Asia (the Republic of China, the Empire of Japan and the Kingdom of Siam), and Africa (Union of South Africa).", "title": "History" }, { "paragraph_id": 14, "text": "With the outbreak of World War I, the ICRC found itself confronted with enormous challenges that it could handle only by working closely with the national Red Cross societies. Red Cross nurses from around the world, including the United States and Japan, came to support the medical services of the armed forces of the European countries involved in the war. On 15 August 1914, immediately after the start of the war, the ICRC set up its International Prisoners-of-War Agency (IPWA) to trace POWs and to re-establish communications with their respective families. The Austrian writer and pacifist Stefan Zweig described the situation at the Geneva headquarters of the ICRC:", "title": "History" }, { "paragraph_id": 15, "text": "Hardly had the first blows been struck when cries of anguish from all lands began to be heard in Switzerland. Thousands who were without news of fathers, husbands, and sons in the battlefields, stretched despairing arms into the void. By hundreds, by thousands, by tens of thousands, letters and telegrams poured into the little House of the Red Cross in Geneva, the only international rallying point that still remained. Isolated, like stormy petrels, came the first inquiries for missing relatives; then these inquiries themselves became a storm. The letters arrived in sackfuls. Nothing had been prepared for dealing with such an inundation of misery. 
The Red Cross had no space, no organization, no system, and above all no helpers.", "title": "History" }, { "paragraph_id": 16, "text": "However, by the end of the year, the Agency already had some 1,200 volunteers who worked in the Musée Rath of Geneva, amongst them the French writer and pacifist Romain Rolland. When he was awarded the Nobel Prize for Literature for 1915, he donated half of the prize money to the Agency. Most of the staff were women, some of whom – like Marguerite van Berchem, Marguerite Cramer and Suzanne Ferrière – served in high positions as pioneers of gender equality in an organisation dominated by men.", "title": "History" }, { "paragraph_id": 17, "text": "By the end of the war, the Agency had transferred about 20 million letters and messages, 1.9 million parcels, and about 18 million Swiss francs in monetary donations to POWs of all affected countries. Furthermore, due to the intervention of the Agency, about 200,000 prisoners were exchanged between the warring parties, released from captivity and returned to their home country. The organizational card index of the Agency accumulated about 7 million records from 1914 to 1923. The card index led to the identification of about 2 million POWs and the ability to contact their families. The complete index is on loan today from the ICRC to the International Red Cross and Red Crescent Museum in Geneva. The right to access the index is still strictly restricted to the ICRC.", "title": "History" }, { "paragraph_id": 18, "text": "During the entire war, the ICRC monitored warring parties' compliance with the Geneva Conventions of the 1907 revision and forwarded complaints about violations to the respective country. When chemical weapons were used in this war for the first time in history, the ICRC mounted a vigorous protest against their use. Even without having a mandate from the Geneva Conventions, the ICRC tried to ameliorate the suffering of civil populations. 
In territories that were officially designated as \"occupied territories\", the ICRC could assist the civilian population on the basis of the Hague Convention's \"Laws and Customs of War on Land\" of 1907. This convention was also the legal basis for the ICRC's work for prisoners of war. In addition to the work of the International Prisoner-of-War Agency as described above, this included inspection visits to POW camps. A total of 524 camps throughout Europe were visited by 41 delegates from the ICRC through the end of the war.", "title": "History" }, { "paragraph_id": 19, "text": "Between 1916 and 1918, the ICRC published a number of postcards with scenes from the POW camps. The pictures showed the prisoners in day-to-day activities such as the distribution of letters from home. The intention of the ICRC was to provide the families of the prisoners with some hope and solace and to alleviate their uncertainties about the fate of their loved ones. After the end of the war, between 1920 and 1922, the ICRC organized the return of about 500,000 prisoners to their home countries. In 1920, the task of repatriation was handed over to the newly founded League of Nations, which appointed the Norwegian diplomat and scientist Fridtjof Nansen as its \"High Commissioner for Repatriation of the War Prisoners\". His legal mandate was later extended to support and care for war refugees and displaced persons when his office became that of the League of Nations \"High Commissioner for Refugees\". Nansen, who invented the Nansen passport for stateless refugees and was awarded the Nobel Peace Prize in 1922, appointed two delegates from the ICRC as his deputies. A year before the end of the war, the ICRC received the 1917 Nobel Peace Prize for its outstanding wartime work. It was the only Nobel Peace Prize awarded in the period from 1914 to 1918. In 1923, the International Committee of the Red Cross adopted a change in its policy regarding the selection of new members. 
Until then, only citizens of the city of Geneva could serve on the committee; this restriction was now lifted to include all Swiss citizens. As a direct consequence of World War I, a treaty was adopted in 1925 which outlawed the use of suffocating or poisonous gases and biological agents as weapons. Four years later, the original Convention was revised and the second Geneva Convention \"relative to the Treatment of Prisoners of War\" was established. The events of World War I and the respective activities of the ICRC significantly increased the reputation and authority of the Committee among the international community and led to an extension of its competencies.", "title": "History" }, { "paragraph_id": 20, "text": "As early as 1934, a draft proposal for an additional convention for the protection of the civil population in occupied territories during an armed conflict was adopted by the International Red Cross Conference. However, most governments had little interest in implementing this convention, and it was thus prevented from entering into force before the beginning of World War II.", "title": "History" }, { "paragraph_id": 21, "text": "The Red Cross' response to the Holocaust has been the subject of significant controversy and criticism. As early as May 1944, the ICRC was criticized for its indifference to Jewish suffering and death—criticism that intensified after the end of the war, when the full extent of the Holocaust became undeniable.", "title": "History" }, { "paragraph_id": 22, "text": "One defense against these allegations is that the Red Cross was trying to preserve its reputation as a neutral and impartial organization by not interfering with what was viewed as a German internal matter.
The Red Cross also considered its primary focus to be prisoners of war whose countries had signed the Geneva Convention.", "title": "History" }, { "paragraph_id": 23, "text": "The Geneva Conventions in their 1929 revision formed the legal basis of the work of the ICRC during World War II. The activities of the committee were similar to those during World War I: visiting and monitoring POW camps, organizing relief assistance for civilian populations, and administering the exchange of messages regarding prisoners and missing persons. By the end of the war, 179 delegates had conducted 12,750 visits to POW camps in 41 countries. The Central Information Agency on Prisoners-of-War (Agence centrale des prisonniers de guerre) had a staff of 3,000, the card index tracking prisoners contained 45 million cards, and 120 million messages were exchanged by the Agency. One major obstacle was that the Nazi-controlled German Red Cross refused to cooperate with the Geneva statutes, ignoring blatant violations such as the deportation of Jews from Germany and the mass murders conducted in the Nazi concentration camps.", "title": "History" }, { "paragraph_id": 24, "text": "Two other main parties to the conflict, the Soviet Union and Japan, were not party to the 1929 Geneva Conventions and were not legally required to follow the rules of the conventions.", "title": "History" }, { "paragraph_id": 25, "text": "During the war, the ICRC was unable to obtain an agreement with Nazi Germany about the treatment of detainees in concentration camps, and it eventually abandoned applying pressure, saying later that it did so in order to avoid disrupting its work with POWs. The ICRC was also unable to obtain a response to reliable information about the extermination camps and the mass killing of European Jews, Roma, and others. After November 1943, the ICRC obtained permission to send parcels to concentration camp detainees with known names and locations.
Because the notices of receipt for these parcels were often signed by other inmates, the ICRC managed to register the identities of about 105,000 detainees in the concentration camps and delivered about 1.1 million parcels, primarily to the concentration camps Dachau, Buchenwald, Ravensbrück, and Sachsenhausen.", "title": "History" }, { "paragraph_id": 26, "text": "Maurice Rossel was sent to Berlin as a delegate of the International Red Cross; he visited Theresienstadt Ghetto in 1944. The choice of the inexperienced Rossel for this mission has been interpreted as indicative of his organization's indifference to the \"Jewish problem\", while his report has been described as \"emblematic of the failure of the ICRC\" to advocate for Jews during the Holocaust. Rossel's report was noted for its uncritical acceptance of Nazi propaganda. He erroneously stated that Jews were not deported from Theresienstadt. Claude Lanzmann recorded Rossel's account in 1979, producing a documentary entitled A Visitor from the Living.", "title": "History" }, { "paragraph_id": 27, "text": "On 12 March 1945, ICRC president Jacob Burckhardt received a message from SS General Ernst Kaltenbrunner allowing ICRC delegates to visit the concentration camps. This agreement was bound by the condition that these delegates would have to stay in the camps until the end of the war. Ten delegates, among them Louis Haefliger (Mauthausen-Gusen), Paul Dunant (Theresienstadt), and Victor Maurer (Dachau), accepted the assignment and visited the camps. Louis Haefliger prevented the forceful eviction or blasting of Mauthausen-Gusen by alerting American troops.", "title": "History" }, { "paragraph_id": 28, "text": "Friedrich Born (1903–1963), an ICRC delegate in Budapest, saved the lives of about 11,000 to 15,000 Jewish people in Hungary.
Marcel Junod (1904–1961), a physician from Geneva, was one of the first foreigners to visit Hiroshima after the atomic bomb was dropped.", "title": "History" }, { "paragraph_id": 29, "text": "In 1944, the ICRC received its second Nobel Peace Prize. As in World War I, it received the only Peace Prize awarded during the main period of war, 1939 to 1945. At the end of the war, the ICRC worked with national Red Cross societies to organize relief assistance to those countries most severely affected. In 1948, the Committee published a report reviewing its war-era activities from 1 September 1939 to 30 June 1947. The ICRC opened its archives from World War II in 1996.", "title": "History" }, { "paragraph_id": 30, "text": "On 12 August 1949, further revisions to the existing two Geneva Conventions were adopted. An additional convention \"for the Amelioration of the Condition of Wounded, Sick and Shipwrecked Members of Armed Forces at Sea\", now called the second Geneva Convention, was brought under the Geneva Convention umbrella as a successor to the 1907 Hague Convention X. The 1929 Geneva Convention \"relative to the Treatment of Prisoners of War\" was, historically speaking, the second Geneva Convention (it was actually formulated in Geneva), but after 1949 it came to be called the third Convention, since it was adopted later than Hague Convention X, the predecessor of the new second Convention. Reacting to the experience of World War II, the Fourth Geneva Convention, a new Convention \"relative to the Protection of Civilian Persons in Time of War\", was established. Also, the additional protocols of 8 June 1977 were intended to make the conventions apply to internal conflicts such as civil wars.
Today, the four conventions and their added protocols contain more than 600 articles, while there were only 10 articles in the first 1864 convention.", "title": "History" }, { "paragraph_id": 31, "text": "In celebration of its centennial in 1963, the ICRC, together with the League of Red Cross Societies, received its third Nobel Peace Prize.", "title": "History" }, { "paragraph_id": 32, "text": "On 16 October 1990, the UN General Assembly granted the ICRC observer status for its assembly sessions and sub-committee meetings, the first observer status given to a private organization. The resolution was jointly proposed by 138 member states and introduced by the Italian ambassador, in memory of the organization's origins in the Battle of Solferino. An agreement with the Swiss government signed on 19 March 1993 affirmed the already long-standing policy of full independence of the committee from any interference by Switzerland. The agreement protects the full sanctity of all ICRC property in Switzerland including its headquarters and archive, grants members and staff legal immunity, exempts the ICRC from all taxes and fees, guarantees the protected and duty-free transfer of goods, services, and money, provides the ICRC with secure communication privileges at the same level as foreign embassies, and simplifies Committee travel in and out of Switzerland.", "title": "History" }, { "paragraph_id": 33, "text": "At the end of the Cold War, the ICRC's work became more dangerous. In the 1990s, more delegates died than at any point in its history, especially when working in local and internal armed conflicts. These incidents often demonstrated a lack of respect for the rules of the Geneva Conventions and their protection symbols. 
Among the slain delegates were:", "title": "History" }, { "paragraph_id": 34, "text": "On 27 January 2002, Palestinian Red Crescent volunteer paramedic and suicide bomber Wafa Idris was transported to Jerusalem, Israel, by a Red Crescent ambulance, whose driver was part of the plot, and killed herself while committing the Jaffa Street bombing. Idris, wearing a Red Crescent uniform, detonated a 22-pound (10 kilogram) bomb made up of TNT packed into pipes, in the center of Jerusalem outside a shoe store on the busy main shopping street Jaffa Road. The explosion she caused killed her and Pinhas Tokatli (81), and injured more than 100 others.", "title": "History" }, { "paragraph_id": 35, "text": "Since the 2000s, the ICRC has been active in the Afghanistan conflict areas and has set up six physical rehabilitation centers to help land mine victims. Their support extends to the national and international armed forces, civilians and the armed opposition. They regularly visit detainees under the custody of the Afghan government and the international armed forces, but have also occasionally had access since 2009 to people detained by the Taliban. They have provided basic first aid training and aid kits to both the Afghan security forces and Taliban members because, according to an ICRC spokesperson, \"ICRC's constitution stipulates that all parties harmed by warfare will be treated as fairly as possible\". In August 2021, when NATO-led forces retreated from Afghanistan, the ICRC decided to remain in the country to continue its mission to assist and protect victims of conflict. Since June 2021, ICRC-supported facilities have treated more than 40,000 people wounded during armed confrontations there.", "title": "History" }, { "paragraph_id": 36, "text": "Among the ten largest ICRC deployments worldwide has been the mission in Ukraine, where the organisation has been active since 2014, working closely with the Ukrainian Red Cross Society.
At first, the ICRC was active primarily in the disputed Donbas region, assisting persons injured by armed confrontations there. When Russia invaded Ukraine on 24 February 2022, the fighting moved to more populated areas in eastern, northern, and southern Ukraine. The head of the ICRC delegation in Kyiv warned on 26 February 2022 that neighborhoods of major cities were becoming the frontline, with significant consequences for their populations, including children, the sick, and the elderly. The ICRC urgently called on all parties to the conflict not to forget their obligations under international humanitarian law to ensure the protection of the civilian population and infrastructure, and to respect the dignity of refugees and prisoners of war.", "title": "History" }, { "paragraph_id": 37, "text": "Prior to the war that broke out in 2023, Israeli authorities required Palestinian ambulances to undergo thorough searches when passing through checkpoints, saying the policy was driven by Palestinian organizations using ambulances to transport terrorists and armaments. The Israeli Ministry of Health said that: \"The Red Crescent closely cooperated with the MDA (Magen David Adom) until April 2002. At that time, the IDF claimed that Red Crescent ambulances were being used to carry terrorists. The Red Crescent personnel involved in this violation were interrogated.\"", "title": "History" }, { "paragraph_id": 38, "text": "In October 2023, the ICRC responded to the 2023 Israel–Hamas war that has resulted in the deaths of thousands of civilians on both sides. The ICRC has called the violence \"abhorrent\" and implored both sides to reduce the suffering of civilians.
The ICRC was in constant contact with Hamas and Israeli officials to avoid further carnage.", "title": "History" }, { "paragraph_id": 39, "text": "The 2023 conflict began on 7 October 2023 with the killing, rape, and kidnapping of over 1,400 Israelis and others by Hamas in the 2023 Hamas-led attack on Israel.", "title": "History" }, { "paragraph_id": 40, "text": "Fabrizio Carboni, regional director of the ICRC and IFRC for the Near and Middle East, stressed that the taking of hostages is prohibited under international humanitarian law. The ICRC, as a neutral intermediary, stood ready to conduct humanitarian visits and to facilitate communications between family members and hostages, with the goal of their eventual release. At the same time, he spoke of the impact of the war on residents of Gaza, who were cut off from all food shipments, electricity, and medical supplies, which particularly affected the functioning of local hospitals there. Inhabitants of Gaza had already endured a problematic lack of drinking water before the onset of the hostilities.", "title": "History" }, { "paragraph_id": 41, "text": "Nitsana Darshan-Leitner, attorney, human rights activist, and the founder of Shurat HaDin, wrote in a letter to the ICRC on November 20, 2023, that by allowing Hamas to use its ambulances to transport terrorists, the Red Crescent \"plays an integral part in Hamas's illegal conduct,\" and suggested that the ICRC suspend the Red Crescent from the organization. She said that Hamas had used Red Crescent ambulances for two decades, transporting rockets and Hamas members and sometimes disguising them as wounded.", "title": "History" }, { "paragraph_id": 42, "text": "The monitoring group UN Watch wrote in a report on November 11, 2023, that the ICRC had adopted an overwhelmingly skewed approach to the Hamas-Israel war in its social media.
It said that out of 187 tweets published by the main ICRC accounts on Twitter, 77% were focused on criticizing Israel, expressly or by implication, while only 7% of the ICRC tweets criticized Hamas. As an example, it pointed out that the ICRC promoted a false Hamas story that Israel attacked and \"destroyed\" Al-Ahli Arab Hospital, saying it was \"shocked and horrified\" that \"hundreds were killed\", including patients \"killed in a hospital bed\" and doctors \"losing their lives trying to save others\" in the Israeli attack – all of which was untrue – and that the ICRC never corrected its misinformation. It also said that the ICRC repeatedly promoted the notion that Israel was responsible for a war that was in fact launched by Hamas with its invasion and massacre of October 7, and that the ICRC failed to condemn Hamas's aggression while glossing over the 11,000 rockets launched by Hamas in two months and the displacement of 300,000 Israelis.", "title": "History" }, { "paragraph_id": 43, "text": "Eli Beer, the founder of the emergency medical services organization United Hatzalah, wrote on December 12, 2023, that the ICRC had \"neither the bandwidth nor the fortitude to gain access to captives taken during the massacre, or even to condemn Hamas for the endless list of war crimes it committed on October 7th.\" He said that it was \"refusing to take decisive action to protect Israelis... [and that it did] not provide [the hostages] with the critical medical treatment they are entitled to as prisoners of war.\" He also said that it \"has a clear anti-Israel bent and blatantly discriminates against Jews...
[and that it] did nothing for [the hostages'] release, other than picking up released hostages and driving them from point A to point B, acting as nothing more than an Uber-like transportation service for hostages.\"", "title": "History" }, { "paragraph_id": 44, "text": "The ICRC, working closely with Red Crescent partners, has a neutral, independent and exclusively humanitarian mandate during such escalations of violence in the Middle East, and it urged all parties to protect the lives of civilians, to reduce their suffering and to protect their dignity. During the violent conflict, the ICRC and the Palestinian Red Crescent Society (PRCS) provided hospitals in the Gaza Strip with support through large humanitarian convoys from Egypt, and were seriously affected by numerous aerial attacks on medical facilities and ambulances. The ICRC said in November that civilians had \"overwhelmingly borne the brunt\" of the fighting in the Palestinian enclave and Israel so far. Israeli forces had killed over 15,200 people, including Hamas members, in a devastating bombing campaign and ground offensive.", "title": "History" }, { "paragraph_id": 45, "text": "In late November, the team of the ICRC started a multi-day operation to facilitate the release and transfer of hostages held in Gaza and of Palestinian prisoners to the West Bank. In early December, US Secretary of State Antony Blinken insisted that the Red Cross delegation must have access to the remaining hostages. The ICRC is not a negotiating power, but the ICRC chief had direct talks with senior Hamas leader Ismail Haniyeh in Qatar in November, demanding direct access to the remaining hostages.", "title": "History" }, { "paragraph_id": 46, "text": "In 1919, representatives from the national Red Cross societies of Britain, France, Italy, Japan, and the US came together in Paris to found the \"League of Red Cross Societies\" (IFRC). The original idea came from Henry Davison, who was then president of the American Red Cross.
This move, led by the American Red Cross, expanded the international activities of the Red Cross movement beyond the strict mission of the ICRC to include relief assistance in response to emergency situations which were not caused by war (such as man-made or natural disasters). The American Red Cross already had extensive experience in disaster relief, extending back to its foundation.", "title": "History" }, { "paragraph_id": 47, "text": "The formation of the League, as an additional international Red Cross organization alongside the ICRC, was not without controversy for a number of reasons. The ICRC had, to some extent, valid concerns about a possible rivalry between the two organizations. The foundation of the League was seen as an attempt to undermine the leadership position of the ICRC within the movement and to gradually transfer most of its tasks and competencies to a multilateral institution. In addition, all founding members of the League were national societies from countries of the Entente or from associated partners of the Entente. The original statutes of the League from May 1919 contained further regulations which gave the five founding societies a privileged status and, due to the efforts of Henry Davison, the right to permanently exclude the national Red Cross societies of the countries of the Central Powers, namely Germany, Austria, Hungary, Bulgaria and Turkey, as well as the national Red Cross society of Russia. These rules were contrary to the Red Cross principles of universality and equality among all national societies, a situation which furthered the concerns of the ICRC.", "title": "History" }, { "paragraph_id": 48, "text": "The first relief assistance mission organized by the League was an aid mission for the victims of a famine and subsequent typhus epidemic in Poland.
Only five years after its foundation, the League had already issued 47 donation appeals for missions in 34 countries, an impressive indication of the need for this type of Red Cross work. The total sum raised by these appeals reached 685 million Swiss francs, which were used to bring emergency supplies to the victims of famines in Russia, Germany, and Albania; earthquakes in Chile, Persia, Japan, Colombia, Ecuador, Costa Rica, and Turkey; and refugee flows in Greece and Turkey. The first large-scale disaster mission of the League came after the 1923 earthquake in Japan which killed about 200,000 people and left countless more wounded and without shelter. Due to the League's coordination, the Red Cross society of Japan received goods from its sister societies reaching a total worth of about $100 million. Another important new field initiated by the League was the creation of youth Red Cross organizations within the national societies.", "title": "History" }, { "paragraph_id": 49, "text": "A joint mission of the ICRC and the League in the Russian Civil War from 1917 to 1922 marked the first time the movement was involved in an internal conflict, although still without an explicit mandate from the Geneva Conventions. The League, with support from more than 25 national societies, organized assistance missions and the distribution of food and other aid goods for civil populations affected by hunger and disease. The ICRC worked with the Russian Red Cross Society and later the society of the Soviet Union, constantly emphasizing the ICRC's neutrality. In 1928, the \"International Council\" was founded to coordinate cooperation between the ICRC and the League, a task which was later taken over by the \"Standing Commission\". 
In the same year, a common statute for the movement was adopted for the first time, defining the respective roles of the ICRC and the League within the movement.", "title": "History" }, { "paragraph_id": 50, "text": "During the Abyssinian war between Ethiopia and Italy from 1935 to 1936, the League contributed aid supplies worth about 1.7 million Swiss francs. Because the Italian fascist regime under Benito Mussolini refused any cooperation with the Red Cross, these goods were delivered solely to Ethiopia. During the war, an estimated 29 people died while under the explicit protection of the Red Cross symbol, most of them due to attacks by the Italian Army. During the civil war in Spain from 1936 to 1939, the League once again joined forces with the ICRC, with the support of 41 national societies. In 1939, on the brink of the Second World War, the League relocated its headquarters from Paris to Geneva to take advantage of Swiss neutrality.", "title": "History" }, { "paragraph_id": 51, "text": "In 1952, the 1928 common statute of the movement was revised for the first time. Also, the period of decolonization from 1960 to 1970 was marked by a huge jump in the number of recognized national Red Cross and Red Crescent societies. By the end of the 1960s, there were more than 100 societies around the world. On 10 December 1963, the Federation and the ICRC received the Nobel Peace Prize. In 1983, the League was renamed the \"League of Red Cross and Red Crescent Societies\" to reflect the growing number of national societies operating under the Red Crescent symbol. Three years later, the seven basic principles of the movement as adopted in 1965 were incorporated into its statutes. The name of the League was changed again in 1991 to its current official designation, the \"International Federation of Red Cross and Red Crescent Societies\".
In 1997, the ICRC and the IFRC signed the Seville Agreement, which further defined the responsibilities of both organizations within the movement. In 2004, the IFRC began its largest mission to date after the tsunami disaster in South Asia. More than 40 national societies worked with more than 22,000 volunteers to bring relief to the countless victims left without food and shelter and endangered by the risk of epidemics.", "title": "History" }, { "paragraph_id": 52, "text": "Altogether, there are about 80 million people worldwide who serve with the ICRC, the International Federation, and the National Societies, the majority with the latter.", "title": "Activities" }, { "paragraph_id": 53, "text": "At the 20th International Conference in Neue Hofburg, Vienna, from 2–9 October 1965, delegates \"proclaimed\" seven fundamental principles which are shared by all components of the Movement, and they were added to the official statutes of the Movement in 1986. Their durability and universal acceptance are a result of the process through which they came into being in their present form. Rather than an effort to arrive at agreement, it was an attempt to discover what successful operations and organisational units, over the past 100 years, had in common. As a result, the Fundamental Principles of the Red Cross and Red Crescent were not revealed, but found, through a deliberate and participative process of discovery.", "title": "Activities" }, { "paragraph_id": 54, "text": "That makes it even more important to note that the definition that appears for each principle is an integral part of the Principle in question and not an interpretation that can vary with time and place.", "title": "Activities" }, { "paragraph_id": 55, "text": "", "title": "Activities" }, { "paragraph_id": 56, "text": "The International Conference of the Red Cross and Red Crescent, which occurs once every four years, is the highest institutional body of the Movement.
It gathers delegations from all of the national societies as well as from the ICRC, the IFRC and the signatory states to the Geneva Conventions. In between the conferences, the Standing Commission of the Red Cross and Red Crescent acts as the supreme body and supervises implementation of and compliance with the resolutions of the conference. In addition, the Standing Commission coordinates the cooperation between the ICRC and the IFRC. It consists of two representatives from the ICRC (including its president), two from the IFRC (including its president), and five individuals who are elected by the International Conference. The Standing Commission convenes every six months on average. Moreover, a convention of the Council of Delegates of the Movement takes place every two years in the course of the conferences of the General Assembly of the International Federation. The Council of Delegates plans and coordinates joint activities for the Movement.", "title": "Activities" }, { "paragraph_id": 57, "text": "The official mission of the ICRC as an impartial, neutral, and independent organization is to stand for the protection of the life and dignity of victims of international and internal armed conflicts. According to the revised Seville Agreement of 2022, the ICRC is entrusted with the role of \"co-convener\" with the national Red Cross or Red Crescent society in situations of international and non-international armed conflicts, internal strife and their direct results.", "title": "Activities" }, { "paragraph_id": 58, "text": "The core tasks of the committee, which are derived from the Geneva Conventions and its own statutes, are the following:", "title": "International Committee of the Red Cross (ICRC)" }, { "paragraph_id": 59, "text": "The ICRC is headquartered in the Swiss city of Geneva and has offices in over 100 countries.
It has more than 22,000 staff members worldwide, about 1,400 of them working in its Geneva headquarters, 3,250 expatriate staff serving as general delegates and technical specialists, and about 17,000 locally recruited staff.", "title": "International Committee of the Red Cross (ICRC)" }, { "paragraph_id": 60, "text": "According to Swiss law, the ICRC is defined as a private association. Contrary to popular belief, the ICRC is not a non-governmental organization in the most common sense of the term, nor is it an international organization. As it limits its members (a process called cooptation) to Swiss nationals only, it does not have a policy of open and unrestricted membership for individuals like other legally defined NGOs. The word \"international\" in its name does not refer to its membership but to the worldwide scope of its activities as defined by the Geneva Conventions. The ICRC has special privileges and legal immunities in many countries, based on national law in these countries or through agreements between the committee and respective national governments.", "title": "International Committee of the Red Cross (ICRC)" }, { "paragraph_id": 61, "text": "According to its statutes, it consists of 15 to 25 Swiss-citizen members, whom it coopts for a period of four years. There is no limit to the number of terms an individual member can serve, although a three-quarters majority of all members is required for re-election after the third term.", "title": "International Committee of the Red Cross (ICRC)" }, { "paragraph_id": 62, "text": "The leading organs of the ICRC are the Directorate and the Assembly. The Directorate is the executive body of the committee. It consists of a general director and five directors in the areas of \"Operations\", \"Human Resources\", \"Resources and Operational Support\", \"Communication\", and \"International Law and Cooperation within the Movement\".
The members of the Directorate are appointed by the Assembly to serve for four years. The Assembly, consisting of all of the members of the committee, convenes on a regular basis and is responsible for defining aims, guidelines, and strategies and for supervising the financial matters of the committee. The president of the Assembly is also the president of the committee as a whole. Furthermore, the Assembly elects a five-member Assembly Council which has the authority to decide on behalf of the full Assembly in some matters. The council is also responsible for organizing the Assembly meetings and for facilitating communication between the Assembly and the Directorate.", "title": "International Committee of the Red Cross (ICRC)" }, { "paragraph_id": 63, "text": "Due to Geneva's location in the French-speaking part of Switzerland, the ICRC usually acts under its French name Comité international de la Croix-Rouge (CICR). The official symbol of the ICRC is the Red Cross on a white background with the words \"COMITE INTERNATIONAL GENEVE\" circling the cross.", "title": "International Committee of the Red Cross (ICRC)" }, { "paragraph_id": 64, "text": "The 2023 budget of the ICRC amounts to 2.5 billion Swiss francs. Most of that money comes from states, including Switzerland in its capacity as the depositary state of the Geneva Conventions, from national Red Cross societies, the signatory states of the Geneva Conventions, and from international organizations like the European Union. All payments to the ICRC are voluntary and are received as donations based on two types of appeals issued by the committee: an annual Headquarters Appeal to cover its internal costs and Emergency Appeals for its individual missions.
In 2023, Ukraine is the ICRC's biggest humanitarian operation (at 316.5 million Swiss francs), followed by Afghanistan (218 million francs) and Syria (171.7 million francs).", "title": "International Committee of the Red Cross (ICRC)" }, { "paragraph_id": 65, "text": "The International Federation of Red Cross and Red Crescent Societies coordinates cooperation between national Red Cross and Red Crescent societies throughout the world and supports the foundation of new national societies in countries where no official society exists. On the international stage, the IFRC organizes and leads relief assistance missions after emergencies such as natural disasters, manmade disasters, epidemics, mass refugee flights, and other emergencies. As per the 1997 Seville Agreement, the IFRC is the Lead Agency of the Movement in any emergency situation which does not take place as part of an armed conflict. The IFRC cooperates with the national societies of those countries affected – each called the Operating National Society (ONS) – as well as the national societies of other countries willing to offer assistance – called Participating National Societies (PNS). Among the 187 national societies admitted to the General Assembly of the International Federation as full members or observers, about 25–30 regularly work as PNS in other countries. The most active of those are the American Red Cross, the British Red Cross, the German Red Cross, and the Red Cross societies of Sweden and Norway. 
Another major mission of the IFRC which has gained attention in recent years is its commitment to work towards a codified, worldwide ban on the use of land mines and to bring medical, psychological, and social support for people injured by land mines.", "title": "International Federation of Red Cross and Red Crescent Societies (IFRC)" }, { "paragraph_id": 66, "text": "The tasks of the IFRC can therefore be summarized as follows:", "title": "International Federation of Red Cross and Red Crescent Societies (IFRC)" }, { "paragraph_id": 67, "text": "The IFRC has its headquarters in Geneva. It also runs five zone offices (Africa, Americas, Asia Pacific, Europe, Middle East-North Africa), 14 permanent regional offices and has about 350 delegates in more than 60 delegations around the world. The legal basis for the work of the IFRC is its constitution. The executive body of the IFRC is a secretariat, led by a secretary general. The secretariat is supported by five divisions including \"Programme Services\", \"Humanitarian values and humanitarian diplomacy\", \"National Society and Knowledge Development\" and \"Governance and Management Services\".", "title": "International Federation of Red Cross and Red Crescent Societies (IFRC)" }, { "paragraph_id": 68, "text": "The highest decision-making body of the IFRC is its General Assembly, which convenes every two years with delegates from all of the national societies. Among other tasks, the General Assembly elects the secretary general. Between the convening of General Assemblies, the Governing Board is the leading body of the IFRC. It has the authority to make decisions for the IFRC in a number of areas. 
The Governing Board consists of the president and the vice presidents of the IFRC, the chairpersons of the Finance and Youth Commissions, and twenty elected representatives from national societies.", "title": "International Federation of Red Cross and Red Crescent Societies (IFRC)" }, { "paragraph_id": 69, "text": "The symbol of the IFRC is the combination of the Red Cross (left) and Red Crescent (right) on a white background surrounded by a red rectangular frame.", "title": "International Federation of Red Cross and Red Crescent Societies (IFRC)" }, { "paragraph_id": 70, "text": "The main parts of the budget of the IFRC are funded by contributions from the national societies which are members of the IFRC and through revenues from its investments. The exact amount of contributions from each member society is established by the Finance Commission and approved by the General Assembly. Any additional funding, especially for unforeseen expenses for relief assistance missions, is raised by \"appeals\" published by the IFRC and comes from voluntary donations by national societies, governments, other organizations, corporations, and individuals.", "title": "International Federation of Red Cross and Red Crescent Societies (IFRC)" }, { "paragraph_id": 71, "text": "National Red Cross and Red Crescent societies exist in nearly every country in the world. Within their home country, they take on the duties and responsibilities of a national relief society as defined by International Humanitarian Law. Within the Movement, the ICRC is responsible for legally recognizing a relief society as an official national Red Cross or Red Crescent society. The exact rules for recognition are defined in the statutes of the Movement.
Article 4 of these statutes contains the \"Conditions for recognition of National Societies\":", "title": "National Societies" }, { "paragraph_id": 72, "text": "Once a National Society has been recognized by the ICRC as a component of the International Red Cross and Red Crescent Movement (the Movement), it is in principle admitted to the International Federation of Red Cross and Red Crescent Societies in accordance with the terms defined in the Constitution and Rules of Procedure of the International Federation.", "title": "National Societies" }, { "paragraph_id": 73, "text": "Today, there are 192 National Societies recognized within the Movement, all of which are members of the International Federation.", "title": "National Societies" }, { "paragraph_id": 74, "text": "The most recent National Societies to have been recognized within the Movement are the Maldives Red Crescent Society (9 November 2011), the Cyprus Red Cross Society, the South Sudan Red Cross Society (12 November 2013) and, lastly, the Tuvalu Red Cross Society (on 1 March 2016).", "title": "National Societies" }, { "paragraph_id": 75, "text": "Despite formal independence regarding its organizational structure and work, each national society is still bound by the laws of its home country. In many countries, national Red Cross and Red Crescent societies enjoy exceptional privileges due to agreements with their governments or specific \"Red Cross Laws\" granting full independence as required by the International Movement.
The duties and responsibilities of a national society as defined by International Humanitarian Law and the statutes of the Movement include humanitarian aid in armed conflicts and emergency crises such as natural disasters through activities such as Restoring Family Links.", "title": "National Societies" }, { "paragraph_id": 76, "text": "Depending on their respective human, technical, financial, and organizational resources, many national societies take on additional humanitarian tasks within their home countries such as blood donation services or acting as civilian Emergency Medical Service (EMS) providers. The ICRC and the International Federation cooperate with the national societies in their international missions, especially with human, material, and financial resources and organizing on-site logistics.", "title": "National Societies" }, { "paragraph_id": 77, "text": "The Russian Red Cross supports the organisation Myvmeste, which supports the Russian Army in its 2022 invasion of Ukraine through its \"Everything for victory\" fund.", "title": "National Societies" }, { "paragraph_id": 78, "text": "During the Hamas-Israel war, the IFRC in particular called for humanitarian access across Gaza and the West Bank, the release of hostages, the protection of civilians, hospitals and humanitarian workers from indiscriminate attack, and compliance with international humanitarian law to ensure its continued activities in the occupied Palestinian territories.", "title": "National Societies" }, { "paragraph_id": 79, "text": "The Red Cross, Red Crescent and Red Crystal emblems are officially recognized by the movement. De jure, the Red Lion and Sun emblem is also an official emblem, though it has fallen into disuse.
Various other countries have also lobbied for alternative symbols, which have been rejected because of concerns of territorialism.", "title": "History of the emblems" }, { "paragraph_id": 80, "text": "The Red Cross emblem was officially approved in Geneva in 1863.", "title": "History of the emblems" }, { "paragraph_id": 81, "text": "The Red Cross flag is not to be confused with the Saint George's Cross depicted on the flags of England, Barcelona, Georgia, Freiburg im Breisgau, and several other places. In order to avoid this confusion the protected symbol is sometimes referred to as the \"Greek Red Cross\"; that term is also used in United States law to describe the Red Cross. The red cross of the Saint George cross extends to the edge of the flag, whereas the red cross on the Red Cross flag does not.", "title": "History of the emblems" }, { "paragraph_id": 82, "text": "The Red Cross flag is the colour-switched version of the Flag of Switzerland, in recognition of \"the pioneering work of Swiss citizens in establishing internationally recognized standards for the protection of wounded combatants and military medical facilities\". 
In 1906, to put an end to the argument of the Ottoman Empire that the flag had its roots in Christianity, it was decided officially to promote the idea that the Red Cross flag had been formed by reversing the federal colours of Switzerland, although no written evidence of this origin had ever been found.", "title": "History of the emblems" }, { "paragraph_id": 83, "text": "The 1899 convention signed at the Hague extended the use of the Red Cross flag to naval ensigns, requiring that \"all hospital ships shall make themselves known by hoisting, together with their national flag, the white flag with a red cross provided by the Geneva Convention\".", "title": "History of the emblems" }, { "paragraph_id": 84, "text": "The Red Crescent emblem was first used by ICRC volunteers during the armed conflict of 1876–1878 between the Ottoman Empire and the Russian Empire. The symbol was officially adopted in 1929, and so far 33 states in the Muslim world have recognized it. In common with the official promotion of the red cross symbol as a colour-reversal of the Swiss flag (rather than a religious symbol), the red crescent is similarly presented as being derived from a colour-reversal of the flag of the Ottoman Empire.", "title": "History of the emblems" }, { "paragraph_id": 85, "text": "The International Committee of the Red Cross (ICRC) was concerned with the possibility that the two previous symbols (Red Cross and Red Crescent) were conveying religious meanings which would not be compatible with, for example, a majority Hindu or Buddhist country from the Asia-Pacific region, where the majority did not associate with these symbols.
Therefore, in 1992, the then-president Cornelio Sommaruga decided that a third, more neutral symbol was required.", "title": "History of the emblems" }, { "paragraph_id": 86, "text": "On 8 December 2005, in response to growing pressure to accommodate Magen David Adom (MDA), Israel's national emergency medical, disaster, ambulance, and blood bank service, as a full member of the Red Cross and Red Crescent movement, a new emblem (officially the Third Protocol Emblem, but more commonly known as the Red Crystal) was adopted by an amendment of the Geneva Conventions known as Protocol III, fulfilling Sommaruga's suggestion.", "title": "History of the emblems" }, { "paragraph_id": 87, "text": "The Crystal can be found on official buildings and occasionally in the field. This symbolises equality and has no political, religious, or geographical connotations, thus allowing any country not comfortable with the symbolism of the original two flags to join the movement.", "title": "History of the emblems" }, { "paragraph_id": 88, "text": "The Red Lion and Sun Society of Iran was established in 1922 and admitted to the Red Cross and Red Crescent movement in 1923. The symbol was introduced at Geneva in 1864, as a counter example to the crescent and cross used by two of Iran's rivals, the Ottoman and the Russian empires. Although that claim is inconsistent with the Red Crescent's history, that history also suggests that the Red Lion and Sun, like the Red Crescent, may have been conceived during the 1877–1878 war between Russia and Turkey.", "title": "History of the emblems" }, { "paragraph_id": 89, "text": "Due to the emblem's association with the Iranian monarchy, the Islamic Republic of Iran replaced the Red Lion and Sun with the Red Crescent in 1980, consistent with two existing Red Cross and Red Crescent symbols. 
Although the Red Lion and Sun has now fallen into disuse, Iran has in the past reserved the right to take it up again at any time; the Geneva Conventions continue to recognize it as an official emblem, and that status was confirmed by Protocol III in 2005 even as it added the Red Crystal.", "title": "History of the emblems" }, { "paragraph_id": 90, "text": "For over 50 years, Israel requested the addition of a red Star of David, arguing that since Christian and Muslim emblems were recognized, the corresponding Jewish emblem should be as well. This emblem has been used by Magen David Adom (MDA), or Red Star of David, but it is not recognized by the Geneva Conventions as a protected symbol. The Red Star of David is not recognized as a protected symbol outside Israel; instead the MDA uses the Red Crystal emblem during international operations in order to ensure protection. Depending on the circumstances, it may place the Red Star of David inside the Red Crystal, or use the Red Crystal alone.", "title": "History of the emblems" }, { "paragraph_id": 91, "text": "In her March 2000 letter to the International Herald Tribune and the New York Times, Bernadine Healy, then president of the American Red Cross, wrote: \"The international committee's feared proliferation of symbols is a pitiful fig leaf, used for decades as the reason for excluding the Magen David Adom—the Shield (or Star) of David.\" In protest, the American Red Cross withheld millions of dollars in administrative funding from the International Federation of Red Cross and Red Crescent Societies beginning in May 2000.", "title": "History of the emblems" }, { "paragraph_id": 92, "text": "In 1922, a Red Swastika Society was formed in China during the Warlord era. The swastika is used in the Indian subcontinent, East, and Southeast Asia as a symbol to represent Dharma or Hinduism, Buddhism, and Jainism in general.
While the organization has carried out philanthropic relief projects (both domestic and international), as a sectarian religious body it is ineligible for recognition from the International Committee.", "title": "History of the emblems" }, { "paragraph_id": 93, "text": "The Australian TV network ABC and the indigenous rights group Rettet die Naturvölker released a documentary called Blood on the Cross in 1999. It alleged the involvement of the Red Cross with the British and Indonesian military in a massacre in the Southern Highlands of Western New Guinea during the World Wildlife Fund's Mapenduma hostage crisis of May 1996, when Western and Indonesian activists were held hostage by separatists.", "title": "Hostage crisis allegations" }, { "paragraph_id": 94, "text": "Following the broadcast of the documentary, the Red Cross announced publicly that it would appoint an individual outside the organization to investigate the allegations made in the film and any responsibility on its part. Piotr Obuchowicz was appointed to investigate the matter. The report categorically states that the Red Cross personnel accused of involvement were proven not to have been present; that a white helicopter was probably used in a military operation, but that it was not a Red Cross helicopter and must have been painted by one of several military organizations operating in the region at the time, perhaps also bearing the Red Cross logo itself, although no hard evidence of this was found; that this was part of the military operation to free the hostages and was clearly intended to achieve surprise by deceiving the local people into thinking that a Red Cross helicopter was landing; and that the Red Cross should have investigated the allegations more quickly and thoroughly than it did.", "title": "Hostage crisis allegations" }, { "paragraph_id": 95, "text": "46°13′40″N 6°8′14″E / 46.22778°N 6.13722°E / 46.22778; 6.13722", "title": "External links" } ]
The organized International Red Cross and Red Crescent Movement is a humanitarian movement with approximately 16 million volunteers, members, and staff worldwide. It was founded to protect human life and health, to ensure respect for all human beings, and to prevent and alleviate human suffering. Within it there are three distinct organisations that are legally independent from each other, but are united within the movement through common basic principles, objectives, symbols, statutes, and governing organisations.
2002-01-08T14:52:06Z
2023-12-28T21:18:59Z
[ "Template:Blockquote", "Template:Cite news", "Template:Webarchive", "Template:ISSN", "Template:Authority control", "Template:Short description", "Template:Infobox organization", "Template:Flag", "Template:Anchor", "Template:Refbegin", "Template:Prince of Asturias Award for International Cooperation", "Template:Cite letter", "Template:Efn", "Template:Harvnb", "Template:Cite report", "Template:ISBN", "Template:Dodis", "Template:Humanitarian Aid", "Template:Coord", "Template:Redirect", "Template:Citation needed", "Template:Fact", "Template:Main", "Template:Red Cross Red Crescent Movement", "Template:Subject bar", "Template:Insufficient inline citations", "Template:Sfn", "Template:Further", "Template:Unreferenced section", "Template:Cite web", "Template:Refend", "Template:Div col", "Template:Div col end", "Template:Reflist", "Template:Cite journal", "Template:Cite book", "Template:Use dmy dates", "Template:Notelist", "Template:Cite thesis" ]
https://en.wikipedia.org/wiki/International_Red_Cross_and_Red_Crescent_Movement
15,489
Ira Gershwin
Ira Gershwin (born Israel Gershovitz; December 6, 1896 – August 17, 1983) was an American lyricist who collaborated with his younger brother, composer George Gershwin, to create some of the most memorable songs in the English language of the 20th century. With George, he wrote more than a dozen Broadway shows, featuring songs such as "I Got Rhythm", "Embraceable You", "The Man I Love" and "Someone to Watch Over Me". He was also responsible, along with DuBose Heyward, for the libretto to George's opera Porgy and Bess. The success the Gershwin brothers had with their collaborative works has often overshadowed the creative role that Ira played. His mastery of songwriting continued after George's early death in 1937. Ira wrote additional hit songs with composers Jerome Kern, Kurt Weill, Harry Warren and Harold Arlen. His critically acclaimed 1959 book Lyrics on Several Occasions, an amalgam of autobiography and annotated anthology, is an important source for studying the art of the lyricist in the golden age of American popular song. Gershwin was born at 60 Eldridge St in Manhattan, the oldest of four children of Morris (Moishe) and Rose Gershovitz (née Rosa Bruskin), who were Russian Jews from Saint Petersburg and who had emigrated to the United States in 1891. Ira's siblings were George (Jacob, b. 1898), Arthur (b. 1900), and Frances (b. 1906). Morris changed the family name to "Gershwine" (or alternatively "Gershvin") well before their children rose to fame; it was not spelled "Gershwin" until later. Shy in his youth, Ira spent much of his time at home reading, but from grammar school through college, he played a prominent part in several school newspapers and magazines. He graduated in 1914 from Townsend Harris High School, a public school for intellectually gifted students, where he met Yip Harburg, with whom he enjoyed a lifelong friendship and a love of Gilbert and Sullivan. He attended the City College of New York but dropped out. 
The childhood home of Ira and George Gershwin was in the center of the Yiddish Theater District, on the second floor at 91 Second Avenue, between East 5th Street and East 6th Street. They frequented the local Yiddish theaters. While George began composing and "plugging" in Tin Pan Alley from the age of 18, Ira worked as a cashier in his father's Turkish baths. It was not until 1921 that Ira became involved in the music business. Alex Aarons signed Ira to write the songs for his next show, Two Little Girls in Blue, ultimately produced by Abraham Erlanger, along with co-composers Vincent Youmans and Paul Lannin. So as not to appear to trade off George's growing reputation, Ira wrote under the pseudonym "Arthur Francis", after his youngest two siblings. His lyrics were well received, allowing him successfully to enter the show-business world with just one show. Later the same year, the Gershwins collaborated for the first time on a score; this was for A Dangerous Maid, which played in Atlantic City and on tour. It was not until 1924 that Ira and George teamed up to write the music for what became their first Broadway hit Lady, Be Good. Once the brothers joined forces, their combined talents became one of the most influential forces in the history of American Musical Theatre. "When the Gershwins teamed up to write songs for Lady, Be Good, the American musical found its native idiom." Together, they wrote the music for more than 12 shows and four films. Some of their more famous works include "The Man I Love", "Fascinating Rhythm", "Someone to Watch Over Me", "I Got Rhythm" and "They Can't Take That Away from Me". Their partnership continued until George's sudden death from a brain tumor in 1937. Following his brother's death, Ira waited nearly three years before writing again. 
After this temporary retirement, Ira teamed up with accomplished composers such as Jerome Kern (Cover Girl); Kurt Weill (Where Do We Go from Here?; Lady in the Dark); and Harold Arlen (Life Begins at 8:40; A Star Is Born). Over the next 14 years, Gershwin continued to write the lyrics for many film scores and a few Broadway shows. But the failure of Park Avenue in 1946 (a "smart" show about divorce, co-written with composer Arthur Schwartz) was his farewell to Broadway. As he wrote at the time, "Am reading a couple of stories for possible musicalization (if there is such a word) but I hope I don't like them as I think I deserve a long rest." In 1947, he took 11 songs George had written but never used, provided them with new lyrics, and incorporated them into the Betty Grable film The Shocking Miss Pilgrim. He later wrote comic lyrics for Billy Wilder's 1964 movie Kiss Me, Stupid, although most critics believe his final major work was for the 1954 Judy Garland film A Star Is Born. American singer, pianist and musical historian Michael Feinstein worked for Gershwin in the lyricist's latter years, helping him with his archive. Several lost musical treasures were unearthed during this period, and Feinstein performed some of the material. Feinstein's book The Gershwins and Me: A Personal History in Twelve Songs about working for Ira, and George and Ira's music, was published in 2012. According to a 1999 story in Vanity Fair, Ira Gershwin's love for loud music was as great as his wife's loathing of it. When Debby Boone—daughter-in-law of his neighbor Rosemary Clooney—returned from Japan with one of the first Sony Walkmans (utilizing cassette tape), Clooney gave it to Michael Feinstein to give to Ira, "so he could crank it in his ears, you know. And he said, 'This is absolutely wonderful!' And he called his broker and bought Sony stock!" Gershwin married Leonore (née Strunsky) in 1926. 
He died of heart disease in Beverly Hills, California, on August 17, 1983, at the age of 86. He is interred at Westchester Hills Cemetery, Hastings-on-Hudson, New York. Leonore died in 1991. Three of Ira Gershwin's songs ("They Can't Take That Away From Me" (1937), "Long Ago (and Far Away)" (1944) and "The Man That Got Away" (1954)) were nominated for an Academy Award for Best Original Song, though none won. Along with George S. Kaufman and Morrie Ryskind, he was a recipient of the 1932 Pulitzer Prize for Drama for Of Thee I Sing. In 1988, UCLA established The George and Ira Gershwin Lifetime Musical Achievement Award in recognition of the brothers' contribution to music, and for their gift to UCLA of the fight song "Strike Up the Band for UCLA". Recipients include Angela Lansbury (1988), Ray Charles (1991), Mel Tormé (1994), Bernadette Peters (1995), Frank Sinatra (2000), Stevie Wonder (2002), k.d. lang (2003), James Taylor (2004), Babyface (2005), Burt Bacharach (2006), Quincy Jones (2007), Lionel Richie (2008) and Julie Andrews (2009). Ira Gershwin was a joyous listener to the sounds of the modern world. "He had a sharp eye and ear for the minutiae of living." He noted in a diary: "Heard in a day: An elevator's purr, telephone's ring, telephone's buzz, a baby's moans, a shout of delight, a screech from a 'flat wheel', hoarse honks, a hoarse voice, a tinkle, a match scratch on sandpaper, a deep resounding boom of dynamiting in the impending subway, iron hooks on the gutter." In 1987, Ira's widow, Leonore, established the Ira Gershwin Literacy Center at University Settlement, a century-old institution at 185 Eldridge Street on the Lower East Side, New York City. The center offers English-language programs primarily to Hispanic and Chinese Americans. Ira and his younger brother George spent many after-school hours at the Settlement.
The George and Ira Gershwin Collection and the Ira Gershwin Files from the Law Office of Leonard Saxe are both at the Library of Congress Music Division. The Edward Jablonski and Lawrence D. Stewart Gershwin Collection at the Harry Ransom Humanities Research Center at the University of Texas at Austin holds a number of Ira's manuscripts and other material. In 2007, the United States Library of Congress named its Prize for Popular Song after him and his brother George. Recognizing the profound and positive effect of American popular music on the world's culture, the prize will be given annually to a composer or performer whose lifetime contributions exemplify the standard of excellence associated with the Gershwins.
[ { "paragraph_id": 0, "text": "Ira Gershwin (born Israel Gershovitz; December 6, 1896 – August 17, 1983) was an American lyricist who collaborated with his younger brother, composer George Gershwin, to create some of the most memorable songs in the English language of the 20th century. With George, he wrote more than a dozen Broadway shows, featuring songs such as \"I Got Rhythm\", \"Embraceable You\", \"The Man I Love\" and \"Someone to Watch Over Me\". He was also responsible, along with DuBose Heyward, for the libretto to George's opera Porgy and Bess.", "title": "" }, { "paragraph_id": 1, "text": "The success the Gershwin brothers had with their collaborative works has often overshadowed the creative role that Ira played. His mastery of songwriting continued after George's early death in 1937. Ira wrote additional hit songs with composers Jerome Kern, Kurt Weill, Harry Warren and Harold Arlen. His critically acclaimed 1959 book Lyrics on Several Occasions, an amalgam of autobiography and annotated anthology, is an important source for studying the art of the lyricist in the golden age of American popular song.", "title": "" }, { "paragraph_id": 2, "text": "Gershwin was born at 60 Eldridge St in Manhattan, the oldest of four children of Morris (Moishe) and Rose Gershovitz (née Rosa Bruskin), who were Russian Jews from Saint Petersburg and who had emigrated to the United States in 1891. Ira's siblings were George (Jacob, b. 1898), Arthur (b. 1900), and Frances (b. 1906). Morris changed the family name to \"Gershwine\" (or alternatively \"Gershvin\") well before their children rose to fame; it was not spelled \"Gershwin\" until later. 
Shy in his youth, Ira spent much of his time at home reading, but from grammar school through college, he played a prominent part in several school newspapers and magazines.", "title": "Life and career" }, { "paragraph_id": 3, "text": "He graduated in 1914 from Townsend Harris High School, a public school for intellectually gifted students, where he met Yip Harburg, with whom he enjoyed a lifelong friendship and a love of Gilbert and Sullivan. He attended the City College of New York but dropped out.", "title": "Life and career" }, { "paragraph_id": 4, "text": "The childhood home of Ira and George Gershwin was in the center of the Yiddish Theater District, on the second floor at 91 Second Avenue, between East 5th Street and East 6th Street. They frequented the local Yiddish theaters.", "title": "Life and career" }, { "paragraph_id": 5, "text": "While George began composing and \"plugging\" in Tin Pan Alley from the age of 18, Ira worked as a cashier in his father's Turkish baths. It was not until 1921 that Ira became involved in the music business. Alex Aarons signed Ira to write the songs for his next show, Two Little Girls in Blue, ultimately produced by Abraham Erlanger, along with co-composers Vincent Youmans and Paul Lannin. So as not to appear to trade off George's growing reputation, Ira wrote under the pseudonym \"Arthur Francis\", after his youngest two siblings. His lyrics were well received, allowing him successfully to enter the show-business world with just one show. Later the same year, the Gershwins collaborated for the first time on a score; this was for A Dangerous Maid, which played in Atlantic City and on tour.", "title": "Life and career" }, { "paragraph_id": 6, "text": "It was not until 1924 that Ira and George teamed up to write the music for what became their first Broadway hit Lady, Be Good. Once the brothers joined forces, their combined talents became one of the most influential forces in the history of American Musical Theatre. 
\"When the Gershwins teamed up to write songs for Lady, Be Good, the American musical found its native idiom.\" Together, they wrote the music for more than 12 shows and four films. Some of their more famous works include \"The Man I Love\", \"Fascinating Rhythm\", \"Someone to Watch Over Me\", \"I Got Rhythm\" and \"They Can't Take That Away from Me\". Their partnership continued until George's sudden death from a brain tumor in 1937. Following his brother's death, Ira waited nearly three years before writing again.", "title": "Life and career" }, { "paragraph_id": 7, "text": "After this temporary retirement, Ira teamed up with accomplished composers such as Jerome Kern (Cover Girl); Kurt Weill (Where Do We Go from Here?; Lady in the Dark); and Harold Arlen (Life Begins at 8:40; A Star Is Born). Over the next 14 years, Gershwin continued to write the lyrics for many film scores and a few Broadway shows. But the failure of Park Avenue in 1946 (a \"smart\" show about divorce, co-written with composer Arthur Schwartz) was his farewell to Broadway. As he wrote at the time, \"Am reading a couple of stories for possible musicalization (if there is such a word) but I hope I don't like them as I think I deserve a long rest.\"", "title": "Life and career" }, { "paragraph_id": 8, "text": "In 1947, he took 11 songs George had written but never used, provided them with new lyrics, and incorporated them into the Betty Grable film The Shocking Miss Pilgrim. He later wrote comic lyrics for Billy Wilder's 1964 movie Kiss Me, Stupid, although most critics believe his final major work was for the 1954 Judy Garland film A Star Is Born.", "title": "Life and career" }, { "paragraph_id": 9, "text": "American singer, pianist and musical historian Michael Feinstein worked for Gershwin in the lyricist's latter years, helping him with his archive. Several lost musical treasures were unearthed during this period, and Feinstein performed some of the material. 
Feinstein's book The Gershwins and Me: A Personal History in Twelve Songs about working for Ira, and George and Ira's music, was published in 2012.", "title": "Life and career" }, { "paragraph_id": 10, "text": "According to a 1999 story in Vanity Fair, Ira Gershwin's love for loud music was as great as his wife's loathing of it. When Debby Boone—daughter-in-law of his neighbor Rosemary Clooney—returned from Japan with one of the first Sony Walkmans (utilizing cassette tape), Clooney gave it to Michael Feinstein to give to Ira, \"so he could crank it in his ears, you know. And he said, 'This is absolutely wonderful!' And he called his broker and bought Sony stock!\"", "title": "Life and career" }, { "paragraph_id": 11, "text": "Gershwin married Leonore (née Strunsky) in 1926. He died of heart disease in Beverly Hills, California, on 17 August 1983 at the age of 86. He is interred at Westchester Hills Cemetery, Hastings-on-Hudson, New York. Leonore died in 1991.", "title": "Personal life" }, { "paragraph_id": 12, "text": "Three of Ira Gershwin's songs (\"They Can't Take That Away From Me\" (1937), \"Long Ago (and Far Away)\" (1944) and \"The Man That Got Away\" (1954)) were nominated for an Academy Award for Best Original Song, though none won.", "title": "Awards and honors" }, { "paragraph_id": 13, "text": "Along with George S Kaufman and Morrie Ryskind, he was a recipient of the 1932 Pulitzer Prize for Drama for Of Thee I Sing.", "title": "Awards and honors" }, { "paragraph_id": 14, "text": "In 1988 UCLA established The George and Ira Gershwin Lifetime Musical Achievement Award in recognition of the brothers' contribution to music, and for their gift to UCLA of the fight song \"Strike Up the Band for UCLA\". Recipients include Angela Lansbury (1988), Ray Charles (1991), Mel Tormé (1994), Bernadette Peters (1995), Frank Sinatra (2000), Stevie Wonder (2002), k.d. 
lang (2003), James Taylor (2004), Babyface (2005), Burt Bacharach (2006), Quincy Jones (2007), Lionel Richie (2008) and Julie Andrews (2009).", "title": "Awards and honors" }, { "paragraph_id": 15, "text": "Ira Gershwin was a joyous listener to the sounds of the modern world. \"He had a sharp eye and ear for the minutiae of living.\" He noted in a diary: \"Heard in a day: An elevator's purr, telephone's ring, telephone's buzz, a baby's moans, a shout of delight, a screech from a 'flat wheel', hoarse honks, a hoarse voice, a tinkle, a match scratch on sandpaper, a deep resounding boom of dynamiting in the impending subway, iron hooks on the gutter.\"", "title": "Legacy" }, { "paragraph_id": 16, "text": "In 1987, Ira's widow, Leonore, established the Ira Gershwin Literacy Center at University Settlement, a century-old institution at 185 Eldridge Street on the Lower East Side, New York City. The center is designed to give English-language programs to primarily Hispanic and Chinese Americans. Ira and his younger brother George spent many after-school hours at the Settlement.", "title": "Legacy" }, { "paragraph_id": 17, "text": "The George and Ira Gershwin Collection and the Ira Gershwin Files from the Law Office of Leonard Saxe are both at the Library of Congress Music Division. The Edward Jablonski and Lawrence D. Stewart Gershwin Collection at the Harry Ransom Humanities Research Center at the University of Texas at Austin holds a number of Ira's manuscripts and other material.", "title": "Legacy" }, { "paragraph_id": 18, "text": "In 2007, the United States Library of Congress named its Prize for Popular Song after him and his brother George. Recognizing the profound and positive effect of American popular music on the world's culture, the prize will be given annually to a composer or performer whose lifetime contributions exemplify the standard of excellence associated with the Gershwins.", "title": "Legacy" } ]
Ira Gershwin was an American lyricist who collaborated with his younger brother, composer George Gershwin, to create some of the most memorable songs in the English language of the 20th century. With George, he wrote more than a dozen Broadway shows, featuring songs such as "I Got Rhythm", "Embraceable You", "The Man I Love" and "Someone to Watch Over Me". He was also responsible, along with DuBose Heyward, for the libretto to George's opera Porgy and Bess. The success the Gershwin brothers had with their collaborative works has often overshadowed the creative role that Ira played. His mastery of songwriting continued after George's early death in 1937. Ira wrote additional hit songs with composers Jerome Kern, Kurt Weill, Harry Warren and Harold Arlen. His critically acclaimed 1959 book Lyrics on Several Occasions, an amalgam of autobiography and annotated anthology, is an important source for studying the art of the lyricist in the golden age of American popular song.
2002-01-08T18:29:01Z
2023-12-12T12:15:25Z
[ "Template:Use mdy dates", "Template:Cite book", "Template:ISBN", "Template:Cite web", "Template:Girl Crazy", "Template:Short description", "Template:Cite news", "Template:Webarchive", "Template:Archival records", "Template:George Gershwin", "Template:Gershwins", "Template:Authority control", "Template:Infobox musical artist", "Template:Cite encyclopedia", "Template:Cite magazine", "Template:Wikiquote", "Template:IBDB name", "Template:PulitzerPrize DramaAuthors 1926-1950", "Template:Reflist", "Template:Official website", "Template:IMDb name", "Template:Find a Grave", "Template:Porgy and Bess" ]
https://en.wikipedia.org/wiki/Ira_Gershwin
15,490
Indus River
The Indus (/ˈɪndəs/ IN-dəs) is a transboundary river of Asia and a trans-Himalayan river of South and Central Asia. The 3,120 km (1,940 mi) river rises in mountain springs northeast of Mount Kailash in Western Tibet, flows northwest through the disputed region of Kashmir, bends sharply to the left after the Nanga Parbat massif, and flows south-by-southwest through Pakistan, before emptying into the Arabian Sea near the port city of Karachi. The river has a total drainage area of circa 1,120,000 km² (430,000 sq mi). Its estimated annual flow is around 243 km³ (58 cu mi), making it one of the 50 largest rivers in the world in terms of average annual flow. Its left-bank tributary in Ladakh is the Zanskar River, and its left-bank tributary in the plains is the Panjnad River, which is formed by the successive confluences of the five Punjab rivers, namely the Chenab, Jhelum, Ravi, Beas, and Sutlej. Its principal right-bank tributaries are the Shyok, Gilgit, Kabul, Kurram, and Gomal rivers. Beginning in a mountain spring and fed by glaciers and rivers in the Himalayan, Karakoram, and Hindu Kush ranges, the river supports the ecosystems of temperate forests, plains, and arid countryside. The northern part of the Indus Valley, with its tributaries, forms the Punjab region of South Asia, while the lower course of the river ends in a large delta in the southern Sindh province of Pakistan. The river has historically been important to many cultures of the region. The 3rd millennium BCE saw the rise of the Indus Valley Civilisation, a major urban civilisation of the Bronze Age. During the 2nd millennium BCE, the Punjab region was mentioned in the Rigveda hymns as Sapta Sindhu and in the Avesta religious texts as Saptha Hindu (both terms meaning "seven rivers"). Early historical kingdoms that arose in the Indus Valley include Gandhāra and the Ror dynasty of Sauvīra.
The Indus River came into the knowledge of the Western world early in the classical period, when King Darius of Persia sent his Greek subject Scylax of Caryanda to explore the river, c. 515 BCE. This river was known to the ancient Indians in Sanskrit as Sindhu and to the Persians as Hindu; both peoples regarded it as "the border river". The variation between the two names is explained by the Old Iranian sound change *s > h, which occurred between 850 and 600 BCE according to Asko Parpola. From the Persian Achaemenid Empire, the name passed to the Greeks as Indós (Ἰνδός). It was adopted by the Romans as Indus. The name India is derived from Indus. The Ladakhis and Tibetans call the river Senge Tsangpo (སེང་གེ་གཙང་པོ།), Baltis call it Gemtsuh and Tsuh-Fo, Pashtuns call it Nilab, Sher Darya and Abbasin, while Sindhis call it Mehran, Purali and Samundar. The modern name in Urdu and Hindi is Sindh (Urdu: سِنْدھ, Hindi: सिंध), a semi-learned borrowing from Sanskrit. The Indus River provides key water resources for Pakistan's economy – especially the breadbasket of Punjab province, which accounts for most of the nation's agricultural production, and Sindh. The word Punjab means "land of five rivers", and the five rivers are the Jhelum, Chenab, Ravi, Beas and Sutlej, all of which finally flow into the Indus. The Indus also supports many heavy industries and provides the main supply of potable water in Pakistan. The total length of the river varies in different sources. The length used in this article is 3,180 km (1,980 mi), taken from the Himalayan Climate and Water Atlas (2015). Historically, the 1909 edition of The Imperial Gazetteer of India gave it as "just over 1,800 miles". A shorter figure of 2,880 km (1,790 mi) has been widely used in modern sources, as has the longer 3,180 km (1,980 mi). The modern Encyclopedia Britannica was originally published in 1999 with the shorter measurement, but was updated in 2015 to use the longer measurement.
Both lengths are commonly found in modern publications; in some cases, both measurements can be found within the same work. An extended figure of circa 3,600 km (2,200 mi) was announced by a Chinese research group in 2011, based on a comprehensive remeasurement from satellite imagery, and a ground expedition to identify an alternative source point, but detailed analysis has not yet been published. The ultimate source of the Indus is in Tibet, but there is some debate about the exact source. The traditional source of the river is the Sênggê Kanbab (Sênggê Zangbo) or "Lion's Mouth", a perennial spring not far from the sacred Mount Kailash, marked by a long low line of Tibetan chortens. There are several other tributaries nearby that may form a longer stream than the Sênggê Kanbab but, unlike it, all depend on snowmelt. The Zanskar River, which flows into the Indus in Ladakh, has a greater volume of water than the Indus itself before that point. An alternative reckoning begins the river around 300 km (190 mi) further upstream, at the confluence of the Sengge Zangbo and Gar Tsangpo rivers, which drain the Nganglong Kangri and Gangdise Shan (Gang Rinpoche, Mt. Kailash) mountain ranges. The 2011 remeasurement suggested the source was a small lake northeast of Mount Kailash, rather than either of the two points previously used. The Indus then flows northwest through Ladakh (Indian-administered Kashmir) and Baltistan and Gilgit (Pakistan-administered Kashmir), just south of the Karakoram range. The Shyok, Shigar and Gilgit rivers carry glacial waters into the main river. It gradually bends to the south and descends into the Punjab plains at Kalabagh, Pakistan. The Indus passes through gigantic gorges 4,500–5,200 metres (15,000–17,000 ft) deep near the Nanga Parbat massif. It flows swiftly across Hazara and is dammed at the Tarbela Reservoir. The Kabul River joins it near Attock.
The remainder of its route to the sea is in the plains of the Punjab and Sindh, where the flow of the river becomes slow and highly braided. It is joined by the Panjnad at Mithankot. Beyond this confluence, the river was at one time named the Satnad River (sat = "seven", nadī = "river"), as it then carried the waters of the Kabul River, the Indus River and the five Punjab rivers. Passing by Jamshoro, it ends in a large delta to the south of Thatta in the Sindh province of Pakistan. The Indus is one of the few rivers in the world to exhibit a tidal bore. The Indus system is largely fed by the snow and glaciers of the Himalayas, Karakoram and the Hindu Kush ranges. The flow of the river is also determined by the seasons – it diminishes greatly in the winter while flooding its banks in the monsoon months from July to September. There is also evidence of a steady shift in the course of the river since prehistoric times – it deviated westwards from flowing into the Rann of Kutch and adjoining Banni grasslands after the 1819 earthquake. As of 2011, Indus water flows into the Rann of Kutch during floods, breaching its flood banks. The major cities of the Indus Valley Civilisation, such as Harappa and Mohenjo-daro, date back to around 3300 BCE, and represent some of the largest human habitations of the ancient world. The Indus Valley Civilisation extended from northeast Afghanistan to Pakistan and northwest India, with an upward reach from east of the Jhelum River to Ropar on the upper Sutlej. The coastal settlements extended from Sutkagan Dor at the Pakistan-Iran border to Kutch in modern Gujarat, India. There is an Indus site on the Amu Darya at Shortughai in northern Afghanistan, and the Indus site Alamgirpur at the Hindon River is located only 28 km (17 mi) from Delhi. To date, over 1,052 cities and settlements have been found, mainly in the general region of the Ghaggar-Hakra River and its tributaries.
Among the settlements were the major urban centres of Harappa and Mohenjo-daro, as well as Lothal, Dholavira, Ganeriwala, and Rakhigarhi. Only 40 Indus Valley sites have been discovered on the Indus and its tributaries. However, it is notable that the majority of the Indus script seals and inscribed objects discovered were found at sites along the Indus river. Most scholars believe that settlements of the Gandhara grave culture of the early Indo-Aryans flourished in Gandhara from 1700 BCE to 600 BCE, when Mohenjo-daro and Harappa had already been abandoned. The Rigveda describes several rivers, including one named "Sindhu". The Rigvedic "Sindhu" is thought to be the present-day Indus river. It is attested 176 times in its text, 94 times in the plural, and most often used in the generic sense of "river". In the Rigveda, notably in the later hymns, the meaning of the word is narrowed to refer to the Indus river in particular, e.g. in the list of rivers mentioned in the Nadistuti sukta hymn. The Rigvedic hymns apply a feminine gender to all the rivers mentioned therein, except for the Brahmaputra. The word "India" is derived from the Indus River. In ancient times, "India" initially referred to those regions immediately along the east bank of the Indus, where Punjab and Sindh now lie, but by 300 BCE, Greek writers including Herodotus and Megasthenes were applying the term to the entire subcontinent that extends much farther eastward. The lower basin of the Indus forms a natural boundary between the Iranian Plateau and the Indian subcontinent; this region embraces all or parts of the Pakistani provinces of Balochistan, Khyber Pakhtunkhwa, Punjab and Sindh, and the countries Afghanistan and India. The first West Eurasian empire to annex the Indus Valley was the Persian Empire, during the reign of Darius the Great. During his reign, the Greek explorer Scylax of Caryanda was commissioned to explore the course of the Indus. It was crossed by the invading armies of Alexander.
After his Macedonians conquered the west bank, joining it to the Hellenic world, they nevertheless elected to retreat along the southern course of the river, ending Alexander's Asian campaign. Alexander's admiral Nearchus set out from the Indus Delta to explore the Persian Gulf, until reaching the Tigris River. The Indus Valley was later dominated by the Mauryan and Kushan Empires, Indo-Greek Kingdoms, Indo-Scythians and Hephthalites. Over several centuries, Muslim armies of Muhammad ibn al-Qasim, Mahmud of Ghazni, Muhammad of Ghor, Timur and Babur crossed the river to invade Sindh and Punjab, providing a gateway to the Indian subcontinent. The Indus is an antecedent river, meaning that it existed before the Himalayas and entrenched itself while they were rising. The Indus river feeds the Indus submarine fan, which is the second-largest sediment body on Earth. It consists of around 5 million cubic kilometres of material eroded from the mountains. Studies of the sediment in the modern river indicate that the Karakoram Mountains in northern Pakistan and India are the single most important source of material, with the Himalayas providing the next largest contribution, mostly via the large rivers of the Punjab (Jhelum, Ravi, Chenab, Beas and Sutlej). Analysis of sediments from the Arabian Sea has demonstrated that before five million years ago the Indus was not connected to these Punjab rivers, which instead flowed east into the Ganga and were captured after that time. Earlier work showed that sand and silt from western Tibet was reaching the Arabian Sea by 45 million years ago, implying the existence of an ancient Indus River by that time. The delta of this proto-Indus river has subsequently been found in the Katawaz Basin, on the Afghan-Pakistan border. In the Nanga Parbat region, the massive erosion caused by the Indus river following its capture and rerouting through that area is thought to bring middle and lower crustal rocks to the surface.
In November 2011, satellite images showed that the Indus river had re-entered India and was feeding the Great Rann of Kutch, Little Rann of Kutch and a lake near Ahmedabad known as Nal Sarovar. Heavy rains had inundated the river basin, along with Lake Manchar, Lake Hemal and Kalri Lake (all in modern-day Pakistan). This happened two centuries after the Indus river shifted its course westwards following the 1819 Rann of Kutch earthquake. The Induan Age at the start of the Triassic Period of geological time is named for the Indus region. Accounts of the Indus valley from the times of Alexander's campaign indicate a healthy forest cover in the region. The Mughal Emperor Babur writes of encountering rhinoceroses along its bank in his memoirs (the Baburnama). Extensive deforestation and human interference in the ecology of the Shivalik Hills have led to a marked deterioration in vegetation and growing conditions. The Indus valley regions are arid with poor vegetation. Agriculture is sustained largely due to irrigation works. The Indus river and its watershed have a rich biodiversity. It is home to around 25 amphibian species. The Indus river dolphin (Platanista indicus minor) is found only in the Indus River. It is a subspecies of the South Asian river dolphin. The Indus river dolphin formerly also occurred in the tributaries of the Indus river. According to the World Wildlife Fund, it is one of the most threatened cetaceans, with only about 1,816 still existing. It is threatened by habitat degradation from the construction of dams and canals, entanglement in fishing gear, and industrial water pollution. There are two otter species in the Indus River basin: the Eurasian otter in the northeastern highland sections and the smooth-coated otter elsewhere in the river basin. The smooth-coated otters in the Indus River represent a subspecies found nowhere else, the Sindh otter (Lutrogale perspicillata sindica).
The Indus River basin has high diversity, being the home of more than 180 freshwater fish species, including 22 which are found nowhere else. Fish also played a major role in earlier cultures of the region, including the ancient Indus Valley Civilisation where depictions of fish were frequent. The Indus script has a commonly used fish sign, which in its various forms may simply have meant "fish", or referred to stars or gods. In the uppermost, highest part of the Indus River basin there are relatively few genera and species: Diptychus, Ptychobarbus, Schizopyge, Schizopygopsis and Schizothorax snowtrout, Triplophysa loaches, and the catfish Glyptosternon reticulatum. Going downstream these are soon joined by the golden mahseer Tor putitora (alternatively T. macrolepis, although it often is regarded as a synonym of T. putitora) and Schistura loaches. Downriver from around Thakot, Tarbela, the Kabul–Indus river confluence, Attock Khurd and Peshawar the diversity rises strongly, including many cyprinids (Amblypharyngodon, Aspidoparia, Barilius, Chela, Cirrhinus, Crossocheilus, Cyprinion, Danio, Devario, Esomus, Garra, Labeo, Naziritor, Osteobrama, Pethia, Puntius, Rasbora, Salmophasia, Securicula and Systomus), true loaches (Botia and Lepidocephalus), stone loaches (Acanthocobitis and Nemacheilus), ailiid catfish (Clupisoma), bagridae catfish (Batasio, Mystus, Rita and Sperata), airsac catfish (Heteropneustes), schilbid catfish (Eutropiichthys), silurid catfish (Ompok and Wallago), sisorid catfish (Bagarius, Gagata, Glyptothorax and Sisor), gouramis (Trichogaster), nandid leaffish (Nandus), snakeheads (Channa), spiny eel (Macrognathus and Mastacembelus), knifefish (Notopterus), glassfish (Chanda and Parambassis), clupeids (Gudusia), needlefish (Xenentodon) and gobies (Glossogobius), as well as a few introduced species. As the altitude further declines the Indus basin becomes overall quite slow-flowing as it passes through the Punjab Plain. 
Major carp become common, and chameleonfish (Badis), mullet (Sicamugil) and swamp eel (Monopterus) appear. In some upland lakes and tributaries of the Punjab region, snow trout and mahseer are still common, but once the Indus basin reaches its lower plain, the former group is absent and the latter are rare. Many of the species of the middle sections of the Indus basin are also present in the lower. Notable examples of genera that are present in the lower plain but generally not elsewhere in the Indus River basin are the Aphanius pupfish, Aplocheilus killifish, palla fish (Tenualosa ilisha), catla (Labeo catla), rohu (Labeo rohita) and Cirrhinus mrigala. The lowermost part of the river and its delta are home to freshwater fish, but also several brackish and marine species. These include pomfret and prawns. The large delta has been recognized by conservationists as an important ecological region. Here, the river turns into many marshes, streams and creeks and meets the sea at shallow levels. The palla fish (Tenualosa ilisha) of the river is a delicacy for people living along the river. The population of fish in the river is moderately high, with Sukkur, Thatta, and Kotri being the major fishing centres – all in the lower Sindh course. Damming and irrigation have also made fish farming an important economic activity. The Indus is the most important supplier of water resources to the Punjab and Sindh plains – it forms the backbone of agriculture and food production in Pakistan. The river is especially critical since rainfall is meagre in the lower Indus valley. Irrigation canals were first built by the people of the Indus Valley civilisation, and later by the engineers of the Kushan Empire and the Mughal Empire. Modern irrigation was introduced by the British East India Company in 1850 – the construction of modern canals accompanied by the restoration of old canals. The British supervised the construction of one of the most complex irrigation networks in the world.
The Guddu Barrage is 1,350 m (4,430 ft) long – irrigating Sukkur, Jacobabad, Larkana and Kalat. The Sukkur Barrage serves over 20,000 km2 (7,700 sq mi). After Pakistan came into existence, a water control treaty signed between India and Pakistan in 1960 guaranteed that Pakistan would receive water from the Indus River and its two tributaries, the Jhelum River and the Chenab River, independently of upstream control by India. The Indus Basin Project consisted primarily of the construction of two main dams, the Mangla Dam built on the Jhelum River and the Tarbela Dam constructed on the Indus River, together with their subsidiary dams. The Pakistan Water and Power Development Authority undertook the construction of the Chashma-Jhelum link canal – linking the waters of the Indus and Jhelum rivers – extending water supplies to the regions of Bahawalpur and Multan. Pakistan constructed the Tarbela Dam near Rawalpindi – standing 2,743 metres (9,000 ft) long and 143 metres (470 ft) high, with an 80-kilometre (50 mi) long reservoir. It supports the Chashma Barrage near Dera Ismail Khan for irrigation use and flood control and the Taunsa Barrage near Dera Ghazi Khan, which also produces 100,000 kilowatts of electricity. The Kotri Barrage near Hyderabad is 915 metres (3,000 ft) long and provides additional water supplies for Karachi. The extensive linking of tributaries with the Indus has helped spread water resources to the valley of Peshawar, in Khyber Pakhtunkhwa. The extensive irrigation and dam projects provide the basis for Pakistan's large production of crops such as cotton, sugarcane and wheat. The dams also generate electricity for heavy industries and urban centres. The Indus river is sacred to Hindus. The Sindhu Darshan Festival is held every Guru Purnima on the banks of the Indus.
The ethnicities of the Indus Valley (Pakistan and Northwest India) have a greater amount of ANI (or West Eurasian) admixture than other South Asians, including inputs from Western Steppe Herders, with evidence of more sustained and multi-layered migrations from the west. Originally, the delta received almost all of the water from the Indus river, which has an annual flow of approximately 180 billion cubic metres (240×10^9 cu yd), and is accompanied by 400 million tonnes (390×10^6 long tons) of silt. Since the 1940s, dams, barrages and irrigation works have been constructed on the river. The Indus Basin Irrigation System is the "largest contiguous irrigation system developed over the past 140 years" anywhere in the world. This has reduced the flow of water, and by 2018 the average annual flow of water below the Kotri barrage was 33 billion cubic metres (43×10^9 cu yd), and the annual amount of silt discharged was estimated at 100 million tonnes (98×10^6 long tons). As a result, the 2010 Pakistan floods were considered "good news" for the ecosystem and population of the river delta as they brought much-needed fresh water. Any further utilization of the river basin water is not economically feasible. Vegetation and wildlife of the Indus delta are threatened by the reduced inflow of fresh water, along with extensive deforestation, industrial pollution and global warming. Damming has also isolated the delta population of Indus river dolphins from those further upstream. Large-scale diversion of the river's water for irrigation has raised far-reaching issues. Sediment clogging from poor maintenance of canals has affected agricultural production and vegetation on numerous occasions. Irrigation itself is increasing soil salinization, reducing crop yields and in some cases rendering farmland useless for cultivation. The Tibetan Plateau contains the world's third-largest store of ice.
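The scale of the decline in the delta's water and silt supply can be checked with a quick calculation using only the figures quoted above (a minimal sketch, not part of the cited sources):

```python
# Figures quoted above: annual flow and silt load of the Indus,
# historically vs. below the Kotri Barrage by 2018.
flow_before = 180  # billion cubic metres per year
flow_after = 33    # billion cubic metres per year (2018, below Kotri)
silt_before = 400  # million tonnes per year
silt_after = 100   # million tonnes per year (2018 estimate)

flow_drop_pct = 100 * (1 - flow_after / flow_before)
silt_drop_pct = 100 * (1 - silt_after / silt_before)
print(f"Flow reduction: {flow_drop_pct:.0f}%")  # about 82%
print(f"Silt reduction: {silt_drop_pct:.0f}%")  # 75%
```

That is, roughly four-fifths of the historical flow and three-quarters of the silt no longer reach the lower river, which is consistent with the delta degradation described here.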
Qin Dahe, the former head of the China Meteorological Administration, said the recent fast pace of melting and warmer temperatures will be good for agriculture and tourism in the short term, but issued a strong warning: "Temperatures are rising four times faster than elsewhere in China, and the Tibetan glaciers are retreating at a higher speed than in any other part of the world... In the short term, this will cause lakes to expand and bring floods and mudflows... In the long run, the glaciers are vital lifelines of the Indus River. Once they vanish, water supplies in Pakistan will be in peril." "There is insufficient data to say what will happen to the Indus," says David Grey, the World Bank's senior water advisor in South Asia. "But we all have very nasty fears that the flows of the Indus could be severely, severely affected by glacier melt as a consequence of climate change," and reduced by perhaps as much as 50 per cent. "Now what does that mean to a population that lives in a desert [where], without the river, there would be no life? I don't know the answer to that question," he says. "But we need to be concerned about that. Deeply, deeply concerned." U.S. diplomat Richard Holbrooke said, shortly before he died in 2010, that he believed that falling water levels in the Indus River "could very well precipitate World War III." Over the years, factories on the banks of the Indus River have increased levels of water pollution in the river and the atmosphere around it. High levels of pollutants in the river have led to the deaths of endangered Indus river dolphins. The Sindh Environmental Protection Agency has ordered polluting factories around the river to shut down under the Pakistan Environmental Protection Act, 1997. Deaths of Indus river dolphins have also been attributed to fishermen using poison to kill fish and scooping them up. As a result, the government banned fishing from Guddu Barrage to Sukkur.
The Indus is second among a group of ten rivers responsible for about 90% of all the plastic that reaches the oceans. The Yangtze is the only river contributing more plastic. The Indus river is prone to frequent moderate to severe flooding. In July 2010, following abnormally heavy monsoon rains, the Indus River rose above its banks and started flooding. The rain continued for the next two months, devastating large areas of Pakistan. In Sindh, the Indus burst its banks near Sukkur on 8 August, submerging the village of Mor Khan Jatoi. In early August, the heaviest flooding moved southward along the Indus River from severely affected northern regions toward western Punjab, where at least 1,400,000 acres (570,000 ha) of cropland was destroyed, and the southern province of Sindh. As of September 2010, over two thousand people had died and over a million homes had been destroyed since the flooding began. The 2011 Sindh floods began during the Pakistani monsoon season in mid-August 2011, resulting from heavy monsoon rains in Sindh, eastern Balochistan, and southern Punjab. The floods caused considerable damage; an estimated 434 civilians were killed, with 5.3 million people and 1,524,773 homes affected. Sindh is a fertile region and often called the "breadbasket" of the country; the damage and toll of the floods on the local agrarian economy was said to be extensive. At least 1.7 million acres (690,000 ha; 2,700 sq mi) of arable land were inundated. The flooding followed the previous year's floods, which devastated a large part of the country. Unprecedented torrential monsoon rains caused severe flooding in 16 districts of Sindh. There are currently six barrages on the Indus in Pakistan: Guddu Barrage, Sukkur Barrage, Kotri Barrage (also called Ghulam Muhammad barrage), Taunsa Barrage, Chashma Barrage and Jinnah Barrage. A new barrage called "Sindh Barrage" is planned as a terminal barrage on the Indus River.
There are several bridges on the Indus River, such as the Dadu Moro Bridge, Larkana Khairpur Indus River Bridge, Thatta-Sujawal bridge, Jhirk-Mula Katiar bridge and the recently planned Kandhkot-Ghotki bridge. The entire left bank of the Indus in Sindh province is protected from river flooding by around 600 km of levees. The right bank is also leveed from Guddu barrage to Lake Manchar. In response to the construction of the levees, the river has been aggrading rapidly over the last 20 years, leading to breaches upstream of barrages and the inundation of large areas. The Tarbela Dam in Pakistan is constructed on the Indus River, while the controversial Kalabagh dam is also being constructed on the Indus. Pakistan is also building the Munda Dam. Tourism opportunities along the river include the many Buddhist monasteries in Ladakh; Indus-Sarasvati Valley Civilisation sites along the banks of the Indus and Sarasvati (Ghaggar-Hakra) rivers and in the Indus Sagar Doab; the Indus River Delta; various dams such as the Baglihar Dam; the Sindhu Darshan Festival, held every year at Leh; and the Sindhu Pushkaram festival, held at the confluence of the Indus and Zanskar rivers at Nimoo once every 12 years, for 12 days starting when Jupiter enters Kumbha rasi (Aquarius).
[ { "paragraph_id": 0, "text": "The Indus (/ˈɪndəs/ IN-dəs) is a transboundary river of Asia and a trans-Himalayan river of South and Central Asia. The 3,120 km (1,940 mi) river rises in mountain springs northeast of Mount Kailash in Western Tibet, flows northwest through the disputed region of Kashmir, bends sharply to the left after the Nanga Parbat massif, and flows south-by-southwest through Pakistan, before emptying into the Arabian Sea near the port city of Karachi.", "title": "" }, { "paragraph_id": 1, "text": "The river has a total drainage area of circa 1,120,000 km2 (430,000 sq mi). Its estimated annual flow is around 243 km3 (58 cu mi), making it one of the 50 largest rivers in the world in terms of average annual flow. Its left-bank tributary in Ladakh is the Zanskar River, and its left-bank tributary in the plains is the Panjnad River, which is formed by the successive confluences of the five Punjab rivers, namely the Chenab, Jhelum, Ravi, Beas, and Sutlej rivers. Its principal right-bank tributaries are the Shyok, Gilgit, Kabul, Kurram, and Gomal rivers. Beginning in a mountain spring and fed with glaciers and rivers in the Himalayan, Karakoram, and Hindu Kush ranges, the river supports the ecosystems of temperate forests, plains, and arid countryside.", "title": "" }, { "paragraph_id": 2, "text": "The northern part of the Indus Valley, with its tributaries, forms the Punjab region of South Asia, while the lower course of the river ends in a large delta in the southern Sindh province of Pakistan. The river has historically been important to many cultures of the region. The 3rd millennium BCE saw the rise of the Indus Valley Civilisation, a major urban civilization of the Bronze Age. During the 2nd millennium BCE, the Punjab region was mentioned in the Rigveda hymns as Sapta Sindhu and in the Avesta religious texts as Saptha Hindu (both terms meaning \"seven rivers\").
Early historical kingdoms that arose in the Indus Valley include Gandhāra and the Ror dynasty of Sauvīra. The Indus River came into the knowledge of the Western world early in the classical period, when King Darius of Persia sent his Greek subject Scylax of Caryanda to explore the river, c. 515 BCE.", "title": "" }, { "paragraph_id": 3, "text": "This river was known to the ancient Indians in Sanskrit as Sindhu and to the Persians as Hindu, which was regarded by both of them as \"the border river\". The variation between the two names is explained by the Old Iranian sound change *s > h, which occurred between 850 and 600 BCE according to Asko Parpola. From the Persian Achaemenid Empire, the name passed to the Greeks as Indós (Ἰνδός). It was adopted by the Romans as Indus. The name India is derived from Indus.", "title": "Etymology and names" }, { "paragraph_id": 4, "text": "The Ladakhis and Tibetans call the river Senge Tsangpo (སེང་གེ་གཙང་པོ།), Baltis call it Gemtsuh and Tsuh-Fo, Pashtuns call it Nilab, Sher Darya and Abbasin, while Sindhis call it Mehran, Purali and Samundar.", "title": "Etymology and names" }, { "paragraph_id": 5, "text": "The modern name in Urdu and Hindi is Sindh (Urdu: سِنْدھ, Hindi: सिंध), a semi-learned borrowing from Sanskrit.", "title": "Etymology and names" }, { "paragraph_id": 6, "text": "The Indus River provides key water resources for Pakistan's economy – especially the breadbasket of Punjab province, which accounts for most of the nation's agricultural production, and Sindh. The word Punjab means \"land of five rivers\" and the five rivers are Jhelum, Chenab, Ravi, Beas and Sutlej, all of which finally flow into the Indus. The Indus also supports many heavy industries and provides the main supply of potable water in Pakistan.", "title": "Description" }, { "paragraph_id": 7, "text": "The total length of the river varies in different sources.
The length used in this article is 3,180 km (1,980 mi), taken from the Himalayan Climate and Water Atlas (2015). Historically, the 1909 Imperial Gazetteer of India gave it as \"just over 1,800 miles\". A shorter figure of 2,880 km (1,790 mi) has been widely used in modern sources, as has the longer figure of 3,180 km (1,980 mi). The modern Encyclopedia Britannica was originally published in 1999 with the shorter measurement, but was updated in 2015 to use the longer measurement. Both lengths are commonly found in modern publications; in some cases, both measurements can be found within the same work. An extended figure of circa 3,600 km (2,200 mi) was announced by a Chinese research group in 2011, based on a comprehensive remeasurement from satellite imagery, and a ground expedition to identify an alternative source point, but detailed analysis has not yet been published.", "title": "Description" }, { "paragraph_id": 8, "text": "The ultimate source of the Indus is in Tibet, but there is some debate about the exact source. The traditional source of the river is the Sênggê Kanbab (Sênggê Zangbo) or \"Lion's Mouth\", a perennial spring not far from the sacred Mount Kailash, marked by a long low line of Tibetan chortens. There are several other tributaries nearby, which may form a longer stream than Sênggê Kanbab, but unlike the Sênggê Kanbab, are all dependent on snowmelt. The Zanskar River, which flows into the Indus in Ladakh, has a greater volume of water than the Indus itself before that point. An alternative reckoning begins the river around 300 km further upstream, at the confluence of the Sengge Zangbo and Gar Tsangpo rivers, which drain the Nganglong Kangri and Gangdise Shan (Gang Rinpoche, Mt. Kailash) mountain ranges.
The 2011 remeasurement suggested the source was a small lake northeast of Mount Kailash, rather than either of the two points previously used.", "title": "Description" }, { "paragraph_id": 9, "text": "The Indus then flows northwest through Ladakh (Indian-administered Kashmir) and Baltistan and Gilgit (Pakistan-administered Kashmir), just south of the Karakoram range. The Shyok, Shigar and Gilgit rivers carry glacial waters into the main river. The Indus passes through gigantic gorges 4,500–5,200 metres (15,000–17,000 ft) deep near the Nanga Parbat massif. It flows swiftly across Hazara and is dammed at the Tarbela Reservoir. The Kabul River joins it near Attock. It then gradually bends to the south and descends into the Punjab plains at Kalabagh, Pakistan. The remainder of its route to the sea is in the plains of the Punjab and Sindh, where the flow of the river becomes slow and highly braided. It is joined by the Panjnad at Mithankot. Beyond this confluence, the river, at one time, was named the Satnad River (sat = \"seven\", nadī = \"river\"), as the river now carried the waters of the Kabul River, the Indus River and the five Punjab rivers. Passing by Jamshoro, it ends in a large delta to the south of Thatta in the Sindh province of Pakistan.", "title": "Description" }, { "paragraph_id": 10, "text": "The Indus is one of the few rivers in the world to exhibit a tidal bore. The Indus system is largely fed by the snow and glaciers of the Himalayas, Karakoram and the Hindu Kush ranges. The flow of the river is also determined by the seasons – it diminishes greatly in the winter while flooding its banks in the monsoon months from July to September. There is also evidence of a steady shift in the course of the river since prehistoric times – it deviated westwards from flowing into the Rann of Kutch and adjoining Banni grasslands after the 1819 Rann of Kutch earthquake.
As of 2011, Indus water flows into the Rann of Kutch during floods, breaching flood banks.", "title": "Description" }, { "paragraph_id": 11, "text": "The major cities of the Indus Valley Civilisation, such as Harappa and Mohenjo-daro, date back to around 3300 BCE, and represent some of the largest human habitations of the ancient world. The Indus Valley Civilisation extended across northeast Afghanistan to Pakistan and northwest India, with an upward reach from east of the Jhelum River to Ropar on the upper Sutlej. The coastal settlements extended from Sutkagan Dor at the Pakistan-Iran border to Kutch in modern Gujarat, India. There is an Indus site on the Amu Darya at Shortughai in northern Afghanistan, and the Indus site Alamgirpur at the Hindon River is located only 28 km (17 mi) from Delhi. To date, over 1,052 cities and settlements have been found, mainly in the general region of the Ghaggar-Hakra River and its tributaries. Among the settlements were the major urban centres of Harappa and Mohenjo-daro, as well as Lothal, Dholavira, Ganeriwala, and Rakhigarhi. Only 40 Indus Valley sites have been discovered on the Indus and its tributaries. However, the majority of the Indus script seals and inscribed objects discovered were found at sites along the Indus river.", "title": "History" }, { "paragraph_id": 12, "text": "Most scholars believe that settlements of the Gandhara grave culture of the early Indo-Aryans flourished in Gandhara from 1700 BCE to 600 BCE, when Mohenjo-daro and Harappa had already been abandoned.", "title": "History" }, { "paragraph_id": 13, "text": "The Rigveda describes several rivers, including one named \"Sindhu\". The Rigvedic \"Sindhu\" is thought to be the present-day Indus river. It is attested 176 times in its text, 94 times in the plural, and most often used in the generic sense of \"river\".
In the Rigveda, notably in the later hymns, the meaning of the word is narrowed to refer to the Indus river in particular, e.g. in the list of rivers mentioned in the hymn of Nadistuti sukta. The Rigvedic hymns apply a feminine gender to all the rivers mentioned therein, except for the Brahmaputra.", "title": "History" }, { "paragraph_id": 14, "text": "The word \"India\" is derived from the Indus River. In ancient times, \"India\" initially referred to those regions immediately along the east bank of the Indus, in what are now Punjab and Sindh, but by 300 BCE Greek writers including Herodotus and Megasthenes were applying the term to the entire subcontinent that extends much farther eastward.", "title": "History" }, { "paragraph_id": 15, "text": "The lower basin of the Indus forms a natural boundary between the Iranian Plateau and the Indian subcontinent; this region embraces all or parts of the Pakistani provinces Balochistan, Khyber Pakhtunkhwa, Punjab and Sindh and the countries Afghanistan and India. The first West Eurasian empire to annex the Indus Valley was the Persian Empire, during the reign of Darius the Great. During his reign, the Greek explorer Scylax of Caryanda was commissioned to explore the course of the Indus. It was crossed by the invading armies of Alexander. After his Macedonians conquered the west bank, joining it to the Hellenic world, they elected to retreat along the southern course of the river, ending Alexander's Asian campaign. Alexander's admiral Nearchus set out from the Indus Delta to explore the Persian Gulf, until reaching the Tigris River. The Indus Valley was later dominated by the Mauryan and Kushan Empires, Indo-Greek Kingdoms, Indo-Scythians and Hephthalites.
Over several centuries, Muslim armies of Muhammad ibn al-Qasim, Mahmud of Ghazni, Muhammad of Ghor, Timur and Babur crossed the river to invade Sindh and Punjab, providing a gateway to the Indian subcontinent.", "title": "History" }, { "paragraph_id": 16, "text": "The Indus is an antecedent river, meaning that it existed before the Himalayas and entrenched itself while they were rising.", "title": "Geology" }, { "paragraph_id": 17, "text": "The Indus river feeds the Indus submarine fan, which is the second largest sediment body on Earth. It consists of around 5 million cubic kilometres of material eroded from the mountains. Studies of the sediment in the modern river indicate that the Karakoram Mountains in northern Pakistan and India are the single most important source of material, with the Himalayas providing the next largest contribution, mostly via the large rivers of the Punjab (Jhelum, Ravi, Chenab, Beas and Sutlej). Analysis of sediments from the Arabian Sea has demonstrated that before five million years ago the Indus was not connected to these Punjab rivers, which instead flowed east into the Ganga and were captured after that time. Earlier work showed that sand and silt from western Tibet was reaching the Arabian Sea by 45 million years ago, implying the existence of an ancient Indus River by that time. The delta of this proto-Indus river has subsequently been found in the Katawaz Basin, on the Afghan-Pakistan border.", "title": "Geology" }, { "paragraph_id": 18, "text": "In the Nanga Parbat region, the massive amounts of erosion due to the Indus river following the capture and rerouting through that area are thought to bring middle and lower crustal rocks to the surface.", "title": "Geology" }, { "paragraph_id": 19, "text": "In November 2011, satellite images showed that the Indus river had re-entered India and was feeding the Great Rann of Kutch, Little Rann of Kutch and a lake near Ahmedabad known as Nal Sarovar.
Heavy rains had inundated the river basin, along with Lake Manchar, Lake Hemal and Kalri Lake (all in modern-day Pakistan). This happened two centuries after the Indus river shifted its course westwards following the 1819 Rann of Kutch earthquake.", "title": "Geology" }, { "paragraph_id": 20, "text": "The Induan Age at the start of the Triassic Period of geological time is named for the Indus region.", "title": "Geology" }, { "paragraph_id": 21, "text": "Accounts of the Indus valley from the times of Alexander's campaign indicate a healthy forest cover in the region. The Mughal Emperor Babur writes of encountering rhinoceroses along its bank in his memoirs (the Baburnama). Extensive deforestation and human interference in the ecology of the Shivalik Hills have led to a marked deterioration in vegetation and growing conditions. The Indus valley regions are arid with poor vegetation. Agriculture is sustained largely due to irrigation works. The Indus river and its watershed have a rich biodiversity. It is home to around 25 amphibian species.", "title": "Wildlife" }, { "paragraph_id": 22, "text": "The Indus river dolphin (Platanista indicus minor) is found only in the Indus River. It is a subspecies of the South Asian river dolphin. The Indus river dolphin formerly also occurred in the tributaries of the Indus river. According to the World Wildlife Fund, it is one of the most threatened cetaceans, with only about 1,816 still existing. It is threatened by habitat degradation from the construction of dams and canals, entanglement in fishing gear, and industrial water pollution.", "title": "Wildlife" }, { "paragraph_id": 23, "text": "There are two otter species in the Indus River basin: the Eurasian otter in the northeastern highland sections and the smooth-coated otter elsewhere in the river basin.
The smooth-coated otters in the Indus River represent a subspecies found nowhere else, the Sindh otter (Lutrogale perspicillata sindica).", "title": "Wildlife" }, { "paragraph_id": 24, "text": "The Indus River basin has high diversity, being the home of more than 180 freshwater fish species, including 22 which are found nowhere else. Fish also played a major role in earlier cultures of the region, including the ancient Indus Valley Civilisation where depictions of fish were frequent. The Indus script has a commonly used fish sign, which in its various forms may simply have meant \"fish\", or referred to stars or gods.", "title": "Wildlife" }, { "paragraph_id": 25, "text": "In the uppermost, highest part of the Indus River basin there are relatively few genera and species: Diptychus, Ptychobarbus, Schizopyge, Schizopygopsis and Schizothorax snowtrout, Triplophysa loaches, and the catfish Glyptosternon reticulatum. Going downstream these are soon joined by the golden mahseer Tor putitora (alternatively T. macrolepis, although it often is regarded as a synonym of T. putitora) and Schistura loaches. 
Downriver from around Thakot, Tarbela, the Kabul–Indus river confluence, Attock Khurd and Peshawar the diversity rises strongly, including many cyprinids (Amblypharyngodon, Aspidoparia, Barilius, Chela, Cirrhinus, Crossocheilus, Cyprinion, Danio, Devario, Esomus, Garra, Labeo, Naziritor, Osteobrama, Pethia, Puntius, Rasbora, Salmophasia, Securicula and Systomus), true loaches (Botia and Lepidocephalus), stone loaches (Acanthocobitis and Nemacheilus), ailiid catfish (Clupisoma), bagridae catfish (Batasio, Mystus, Rita and Sperata), airsac catfish (Heteropneustes), schilbid catfish (Eutropiichthys), silurid catfish (Ompok and Wallago), sisorid catfish (Bagarius, Gagata, Glyptothorax and Sisor), gouramis (Trichogaster), nandid leaffish (Nandus), snakeheads (Channa), spiny eel (Macrognathus and Mastacembelus), knifefish (Notopterus), glassfish (Chanda and Parambassis), clupeids (Gudusia), needlefish (Xenentodon) and gobies (Glossogobius), as well as a few introduced species. As the altitude further declines the Indus basin becomes overall quite slow-flowing as it passes through the Punjab Plain. Major carp become common, and chameleonfish (Badis), mullet (Sicamugil) and swamp eel (Monopterus) appear. In some upland lakes and tributaries of the Punjab region snow trout and mahseer are still common, but once the Indus basin reaches its lower plain the former group is absent and the latter are rare. Many of the species of the middle sections of the Indus basin are also present in the lower. Notable examples of genera that are present in the lower plain but generally not elsewhere in the Indus River basin are the Aphanius pupfish, Aplocheilus killifish, palla fish (Tenualosa ilisha), catla (Labeo catla), rohu (Labeo rohita) and Cirrhinus mrigala. The lowermost part of the river and its delta are home to freshwater fish, but also several brackish and marine species. This includes pomfret and prawns. 
The large delta has been recognized by conservationists as an important ecological region. Here, the river turns into many marshes, streams and creeks and meets the sea at shallow levels.", "title": "Wildlife" }, { "paragraph_id": 26, "text": "Palla fish (Tenualosa ilisha) of the river is a delicacy for people living along the river. The population of fish in the river is moderately high, with Sukkur, Thatta, and Kotri being the major fishing centres – all in the lower Sindh course. As a result, damming and irrigation have made fish farming an important economic activity.", "title": "Wildlife" }, { "paragraph_id": 27, "text": "The Indus is the most important supplier of water resources to the Punjab and Sindh plains – it forms the backbone of agriculture and food production in Pakistan. The river is especially critical since rainfall is meagre in the lower Indus valley. Irrigation canals were first built by the people of the Indus Valley civilisation, and later by the engineers of the Kushan Empire and the Mughal Empire. Modern irrigation was introduced by the British East India Company in 1850 – the construction of modern canals accompanied with the restoration of old canals. The British supervised the construction of one of the most complex irrigation networks in the world. The Guddu Barrage is 1,350 m (4,430 ft) long – irrigating Sukkur, Jacobabad, Larkana and Kalat. 
The Sukkur Barrage serves over 20,000 km2 (7,700 sq mi).", "title": "Economy" }, { "paragraph_id": 28, "text": "After Pakistan came into existence, a water control treaty signed between India and Pakistan in 1960 guaranteed that Pakistan would receive water from the Indus River and its two tributaries, the Jhelum River and the Chenab River, independently of upstream control by India.", "title": "Economy" }, { "paragraph_id": 29, "text": "The Indus Basin Project consisted primarily of the construction of two main dams, the Mangla Dam built on the Jhelum River and the Tarbela Dam constructed on the Indus River, together with their subsidiary dams. The Pakistan Water and Power Development Authority undertook the construction of the Chashma-Jhelum link canal – linking the waters of the Indus and Jhelum rivers – extending water supplies to the regions of Bahawalpur and Multan. Pakistan constructed the Tarbela Dam near Rawalpindi – 2,743 metres (9,000 ft) long and 143 metres (470 ft) high, with an 80-kilometre (50 mi) long reservoir. It supports the Chashma Barrage near Dera Ismail Khan for irrigation use and flood control, and the Taunsa Barrage near Dera Ghazi Khan, which also produces 100,000 kilowatts of electricity. The Kotri Barrage near Hyderabad is 915 metres (3,000 ft) long and provides additional water supplies for Karachi. The extensive linking of tributaries with the Indus has helped spread water resources to the valley of Peshawar, in the Khyber Pakhtunkhwa. The extensive irrigation and dam projects provide the basis for Pakistan's large production of crops such as cotton, sugarcane and wheat. The dams also generate electricity for heavy industries and urban centres.", "title": "Economy" }, { "paragraph_id": 30, "text": "The Indus river is sacred to Hindus. 
The Sindhu Darshan Festival is held every Guru Purnima on the banks of the Indus.", "title": "People" }, { "paragraph_id": 31, "text": "The ethnicities of the Indus Valley (Pakistan and Northwest India) have a greater amount of ANI (or West Eurasian) admixture than other South Asians, including inputs from Western Steppe Herders, with evidence of more sustained and multi-layered migrations from the west.", "title": "People" }, { "paragraph_id": 32, "text": "Originally, the delta received almost all of the water from the Indus river, which has an annual flow of approximately 180 billion cubic metres (240×10^9 cu yd), and is accompanied by 400 million tonnes (390×10^6 long tons) of silt. Since the 1940s, dams, barrages and irrigation works have been constructed on the river. The Indus Basin Irrigation System is the \"largest contiguous irrigation system developed over the past 140 years\" anywhere in the world. This has reduced the flow of water, and by 2018 the average annual flow of water below the Kotri barrage was 33 billion cubic metres (43×10^9 cu yd), and the annual amount of silt discharged was estimated at 100 million tonnes (98×10^6 long tons). As a result, the 2010 Pakistan floods were considered \"good news\" for the ecosystem and population of the river delta as they brought much-needed fresh water. Any further utilization of the river basin water is not economically feasible.", "title": "Modern issues" }, { "paragraph_id": 33, "text": "Vegetation and wildlife of the Indus delta are threatened by the reduced inflow of fresh water, along with extensive deforestation, industrial pollution and global warming. Damming has also isolated the delta population of Indus river dolphins from those further upstream.", "title": "Modern issues" }, { "paragraph_id": 34, "text": "Large-scale diversion of the river's water for irrigation has raised far-reaching issues. 
Sediment clogging from poor maintenance of canals has affected agricultural production and vegetation on numerous occasions. Irrigation itself is increasing soil salinization, reducing crop yields and in some cases rendering farmland useless for cultivation.", "title": "Modern issues" }, { "paragraph_id": 35, "text": "The Tibetan Plateau contains the world's third-largest store of ice. Qin Dahe, the former head of the China Meteorological Administration, said the recent fast pace of melting and warmer temperatures will be good for agriculture and tourism in the short term, but issued a strong warning:", "title": "Modern issues" }, { "paragraph_id": 36, "text": "Temperatures are rising four times faster than elsewhere in China, and the Tibetan glaciers are retreating at a higher speed than in any other part of the world... In the short term, this will cause lakes to expand and bring floods and mudflows... In the long run, the glaciers are vital lifelines of the Indus River. Once they vanish, water supplies in Pakistan will be in peril.", "title": "Modern issues" }, { "paragraph_id": 37, "text": "\"There is insufficient data to say what will happen to the Indus,\" says David Grey, the World Bank's senior water advisor in South Asia. \"But we all have very nasty fears that the flows of the Indus could be severely, severely affected by glacier melt as a consequence of climate change,\" and reduced by perhaps as much as 50 per cent. \"Now what does that mean to a population that lives in a desert [where], without the river, there would be no life? I don't know the answer to that question,\" he says. \"But we need to be concerned about that. Deeply, deeply concerned.\"", "title": "Modern issues" }, { "paragraph_id": 38, "text": "U.S. 
diplomat Richard Holbrooke said, shortly before he died in 2010, that he believed that falling water levels in the Indus River \"could very well precipitate World War III.\"", "title": "Modern issues" }, { "paragraph_id": 39, "text": "Over the years, factories on the banks of the Indus River have increased levels of water pollution in the river and the atmosphere around it. High levels of pollutants in the river have led to the deaths of the endangered Indus river dolphin. The Sindh Environmental Protection Agency has ordered polluting factories around the river to shut down under the Pakistan Environmental Protection Act, 1997. Deaths of Indus river dolphins have also been attributed to fishermen using poison to kill fish and scooping them up. As a result, the government banned fishing from Guddu Barrage to Sukkur.", "title": "Modern issues" }, { "paragraph_id": 40, "text": "The Indus is second among a group of ten rivers responsible for about 90% of all the plastic that reaches the oceans. The Yangtze is the only river contributing more plastic.", "title": "Modern issues" }, { "paragraph_id": 41, "text": "The Indus river is frequently prone to moderate to severe flooding. In July 2010, following abnormally heavy monsoon rains, the Indus River rose above its banks and started flooding. The rain continued for the next two months, devastating large areas of Pakistan. In Sindh, the Indus burst its banks near Sukkur on 8 August, submerging the village of Mor Khan Jatoi. In early August, the heaviest flooding moved southward along the Indus River from severely affected northern regions toward western Punjab, where at least 1,400,000 acres (570,000 ha) of cropland was destroyed, and the southern province of Sindh. 
As of September 2010, over two thousand people had died and over a million homes had been destroyed since the flooding began.", "title": "Modern issues" }, { "paragraph_id": 42, "text": "The 2011 Sindh floods began during the Pakistani monsoon season in mid-August 2011, resulting from heavy monsoon rains in Sindh, eastern Balochistan, and southern Punjab. The floods caused considerable damage; an estimated 434 civilians were killed, with 5.3 million people and 1,524,773 homes affected. Sindh is a fertile region and often called the \"breadbasket\" of the country; the damage and toll of the floods on the local agrarian economy was said to be extensive. At least 1.7 million acres (690,000 ha; 2,700 sq mi) of arable land were inundated. The flooding followed the previous year's floods, which devastated a large part of the country. Unprecedented torrential monsoon rains caused severe flooding in 16 districts of Sindh.", "title": "Modern issues" }, { "paragraph_id": 43, "text": "There are currently six barrages on the Indus in Pakistan: Guddu Barrage, Sukkur Barrage, Kotri Barrage (also called Ghulam Muhammad barrage), Taunsa Barrage, Chashma Barrage and Jinnah Barrage. A new barrage called \"Sindh Barrage\" is planned as a terminal barrage on the Indus River. Bridges on the Indus include the Dadu Moro Bridge, Larkana Khairpur Indus River Bridge, Thatta-Sujawal bridge, Jhirk-Mula Katiar bridge and the recently planned Kandhkot-Ghotki bridge.", "title": "Barrages, bridges, levees and dams" }, { "paragraph_id": 44, "text": "The entire left bank of the Indus river in Sindh province is protected from river flooding by around 600 km of levees. The right bank side is also leveed from Guddu barrage to Lake Manchar. 
In response to the levee construction, the river has been aggrading rapidly over the last 20 years, leading to breaches upstream of barrages and inundation of large areas.", "title": "Barrages, bridges, levees and dams" }, { "paragraph_id": 45, "text": "Tarbela Dam in Pakistan is constructed on the Indus River, while the controversial Kalabagh dam is also being constructed on the Indus river. Pakistan is also building Munda Dam.", "title": "Barrages, bridges, levees and dams" }, { "paragraph_id": 46, "text": "Tourism opportunities include the many Buddhist monasteries in Ladakh; Indus-Sarasvati Valley Civilisation sites along the banks of the Indus and Sarasvati (Ghaggar-Hakra) rivers and in the Indus Sagar Doab; the Indus River Delta; various dams such as the Baglihar Dam; the Sindhu Darshan Festival, held every year at Leh; and the Sindhu Pushkaram festival, held at the confluence of the Indus and Zanskar rivers at Nimoo once every 12 years, for 12 days starting when Jupiter enters Kumbha rasi (Aquarius).", "title": "Tourism" } ]
The Indus is a transboundary river of Asia and a trans-Himalayan river of South and Central Asia. The 3,120 km (1,940 mi) river rises in mountain springs northeast of Mount Kailash in Western Tibet, flows northwest through the disputed region of Kashmir, bends sharply to the left after the Nanga Parbat massif, and flows south-by-southwest through Pakistan, before emptying into the Arabian Sea near the port city of Karachi. The river has a total drainage area of circa 1,120,000 km2 (430,000 sq mi). Its estimated annual flow is around 243 km3 (58 cu mi), making it one of the 50 largest rivers in the world in terms of average annual flow. Its left-bank tributary in Ladakh is the Zanskar River, and its left-bank tributary in the plains is the Panjnad River which is formed by the successive confluences of the five Punjab rivers, namely the Chenab, Jhelum, Ravi, Beas, and Sutlej rivers. Its principal right-bank tributaries are the Shyok, Gilgit, Kabul, Kurram, and Gomal rivers. Beginning in a mountain spring and fed with glaciers and rivers in the Himalayan, Karakoram, and Hindu Kush ranges, the river supports the ecosystems of temperate forests, plains, and arid countryside. The northern part of the Indus Valley, with its tributaries, forms the Punjab region of South Asia, while the lower course of the river ends in a large delta in the southern Sindh province of Pakistan. The river has historically been important to many cultures of the region. The 3rd millennium BCE saw the rise of Indus Valley Civilisation, a major urban civilization of the Bronze Age. During the 2nd millennium BCE, the Punjab region was mentioned in the Rigveda hymns as Sapta Sindhu and in the Avesta religious texts as Saptha Hindu. Early historical kingdoms that arose in the Indus Valley include Gandhāra, and the Ror dynasty of Sauvīra. 
The Indus River came into the knowledge of the Western world early in the classical period, when King Darius of Persia sent his Greek subject Scylax of Caryanda to explore the river, c. 515 BCE.
2001-11-11T15:58:16Z
2023-12-31T18:04:13Z
[ "Template:Div col end", "Template:Notelist", "Template:Cite encyclopedia", "Template:Cite journal", "Template:Webarchive", "Template:Pp-pc", "Template:Citation needed", "Template:China Rivers", "Template:Div end", "Template:Cite EB1911", "Template:Lang-ur", "Template:As of", "Template:Main", "Template:Efn", "Template:Div col", "Template:Redirect-multi", "Template:Circa", "Template:Use dmy dates", "Template:Use British English", "Template:Convert", "Template:Wide image", "Template:Cite news", "Template:Citation", "Template:Navboxes", "Template:Short description", "Template:Infobox river", "Template:Cvt", "Template:Sfn", "Template:Reflist", "Template:ISBN", "Template:External links", "Template:Sister project links", "Template:For", "Template:Respell", "Template:Lang-hi", "Template:Cite book", "Template:Cite web", "Template:URL", "Template:Asia topic", "Template:Redirect", "Template:IPAc-en", "Template:OSM", "Template:India Rivers", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Indus_River
15,491
Integer factorization
Can integer factorization be solved in polynomial time on a classical computer? In number theory, integer factorization is the decomposition of a positive integer into a product of integers. Every positive integer greater than 1 is either the product of two or more integer factors, in which case it is called a composite number, or it is not, in which case it is called a prime number. For example, 15 is a composite number because 15 = 3 · 5, but 7 is a prime number because it cannot be decomposed in this way. If one of the factors is composite, it can in turn be written as a product of smaller factors, for example 60 = 3 · 20 = 3 · (5 · 4). Continuing this process until every factor is prime is called prime factorization; the result is always unique up to the order of the factors by the prime factorization theorem. A prime factorization algorithm typically involves testing whether each factor is prime after each step. When the numbers are sufficiently large, no efficient non-quantum integer factorization algorithm is known. However, it has not been proven that such an algorithm does not exist. The presumed difficulty of this problem is important for the algorithms used in cryptography such as RSA public-key encryption and the RSA digital signature. Many areas of mathematics and computer science have been brought to bear on the problem, including elliptic curves, algebraic number theory, and quantum computing. Not all numbers of a given length are equally hard to factor. The hardest instances of these problems (for currently known techniques) are semiprimes, the product of two prime numbers. 
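The repeated-division process just described can be sketched directly; `prime_factors` is a hypothetical helper name, and plain trial division like this is practical only for small n:

```python
def prime_factors(n: int) -> list[int]:
    """Factor n into primes by repeated trial division (a minimal sketch)."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out d as often as it divides n
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors
```

For example, `prime_factors(60)` returns `[2, 2, 3, 5]`, the unique prime factorization of 60 up to order.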
When they are both large, for instance more than two thousand bits long, randomly chosen, and about the same size (but not too close, for example, to avoid efficient factorization by Fermat's factorization method), even the fastest prime factorization algorithms on the fastest computers can take enough time to make the search impractical; that is, as the number of digits of the integer being factored increases, the number of operations required to perform the factorization on any computer increases drastically. Many cryptographic protocols are based on the difficulty of factoring large composite integers or a related problem—for example, the RSA problem. An algorithm that efficiently factors an arbitrary integer would render RSA-based public-key cryptography insecure. By the fundamental theorem of arithmetic, every positive integer has a unique prime factorization. (By convention, 1 is the empty product.) Testing whether the integer is prime can be done in polynomial time, for example, by the AKS primality test. If composite, however, the polynomial time tests give no insight into how to obtain the factors. Given a general algorithm for integer factorization, any integer can be factored into its constituent prime factors by repeated application of this algorithm. The situation is more complicated with special-purpose factorization algorithms, whose benefits may not be realized as well or even at all with the factors produced during decomposition. For example, if n = 171 × p × q where p < q are very large primes, trial division will quickly produce the factors 3 and 19 but will take p divisions to find the next factor. 
As a contrasting example, if n is the product of the primes 13729, 1372933, and 18848997161, where 13729 × 1372933 = 18848997157, Fermat's factorization method will begin with a = ⌈√n⌉ = 18848997159, which immediately yields b = √(a² − n) = √4 = 2 and hence the factors a − b = 18848997157 and a + b = 18848997161. While these are easily recognized as composite and prime respectively, Fermat's method will take much longer to factor the composite number because the starting value of a = ⌈√18848997157⌉ = 137292 is a factor of 10 from 1372933. Among the b-bit numbers, the most difficult to factor in practice using existing algorithms are those semiprimes whose factors are of similar size. For this reason, these are the integers used in cryptographic applications. In 2019, Fabrice Boudot, Pierrick Gaudry, Aurore Guillevic, Nadia Heninger, Emmanuel Thomé and Paul Zimmermann factored a 240-digit (795-bit) number (RSA-240) utilizing approximately 900 core-years of computing power. The researchers estimated that a 1024-bit RSA modulus would take about 500 times as long. The largest such semiprime yet factored was RSA-250, an 829-bit number with 250 decimal digits, in February 2020. The total computation time was roughly 2700 core-years of computing using Intel Xeon Gold 6130 at 2.1 GHz. Like all recent factorization records, this factorization was completed with a highly optimized implementation of the general number field sieve run on hundreds of machines. No algorithm has been published that can factor all integers in polynomial time, that is, that can factor a b-bit number n in time O(b^k) for some constant k. Neither the existence nor non-existence of such algorithms has been proved, but it is generally suspected that they do not exist and hence that the problem is not in class P. 
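The Fermat example above can be replayed with a short sketch (the helper name `fermat_factor` is hypothetical, and this is the textbook method, not the optimized variants used in practice): start at a = ⌈√n⌉ and increase a until a² − n is a perfect square.

```python
import math

def fermat_factor(n: int) -> tuple[int, int]:
    """Fermat's method: find a, b with n = a^2 - b^2 = (a - b)(a + b)."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1                  # a = ceil(sqrt(n))
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:         # a^2 - n is a perfect square
            return a - b, a + b
        a += 1
```

Applied to 18848997157 × 18848997161, it succeeds almost immediately, because the two factors straddle √n; for a semiprime with factors far apart, the loop runs far longer.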
The problem is clearly in class NP, but it is generally suspected that it is not NP-complete, though this has not been proven. There are published algorithms that are faster than O((1 + ε)^b) for all positive ε, that is, sub-exponential. As of 2022, the algorithm with the best theoretical asymptotic running time is the general number field sieve (GNFS), first published in 1993, running on a b-bit number n in time exp(((64/9)^(1/3) + o(1)) · (ln n)^(1/3) · (ln ln n)^(2/3)). For current computers, GNFS is the best published algorithm for large n (more than about 400 bits). For a quantum computer, however, Peter Shor discovered an algorithm in 1994 that solves it in polynomial time. This will have significant implications for cryptography if quantum computation becomes scalable. Shor's algorithm takes only O(b^3) time and O(b) space on b-bit number inputs. In 2001, Shor's algorithm was implemented for the first time, by using NMR techniques on molecules that provide seven qubits. It is not known exactly which complexity classes contain the decision version of the integer factorization problem (that is: does n have a factor smaller than k besides 1?). It is known to be in both NP and co-NP, meaning that both "yes" and "no" answers can be verified in polynomial time. An answer of "yes" can be certified by exhibiting a factorization n = d(n/d) with d ≤ k. An answer of "no" can be certified by exhibiting the factorization of n into distinct primes, all larger than k; one can verify their primality using the AKS primality test, and then multiply them to obtain n. The fundamental theorem of arithmetic guarantees that there is only one possible string of increasing primes that will be accepted, which shows that the problem is in both UP and co-UP. It is known to be in BQP because of Shor's algorithm. The problem is suspected to be outside all three of the complexity classes P, NP-complete, and co-NP-complete. It is therefore a candidate for the NP-intermediate complexity class. 
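The two certificates just described are easy to check mechanically. A minimal sketch with hypothetical helper names; plain trial division stands in here for the polynomial-time AKS primality test:

```python
import math

def is_prime(p: int) -> bool:
    """Trial-division primality check (a stand-in for AKS in this sketch)."""
    if p < 2:
        return False
    return all(p % d for d in range(2, math.isqrt(p) + 1))

def verify_yes(n: int, k: int, d: int) -> bool:
    """A 'yes' certificate for 'does n have a factor <= k besides 1?'
    is a single divisor d with 1 < d <= k."""
    return 1 < d <= k and n % d == 0

def verify_no(n: int, k: int, primes: list[int]) -> bool:
    """A 'no' certificate is the prime factorization of n,
    with every prime factor larger than k."""
    return math.prod(primes) == n and all(p > k and is_prime(p) for p in primes)
```

For example, with n = 91 = 7 × 13, `verify_yes(91, 10, 7)` and `verify_no(91, 5, [7, 13])` both accept, while `verify_no(91, 10, [7, 13])` rejects because 7 ≤ 10.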
If it could be proved to be either NP-complete or co-NP-complete, this would imply NP = co-NP, a very surprising result, and therefore integer factorization is widely suspected to be outside both these classes. In contrast, the decision problem "Is n a composite number?" (or equivalently: "Is n a prime number?") appears to be much easier than the problem of specifying factors of n. The composite/prime problem can be solved in polynomial time (in the number b of digits of n) with the AKS primality test. In addition, there are several probabilistic algorithms that can test primality very quickly in practice if one is willing to accept a vanishingly small possibility of error. The ease of primality testing is a crucial part of the RSA algorithm, as it is necessary to find large prime numbers to start with. A special-purpose factoring algorithm's running time depends on the properties of the number to be factored or on one of its unknown factors: size, special form, etc. The parameters which determine the running time vary among algorithms. An important subclass of special-purpose factoring algorithms is the Category 1 or First Category algorithms, whose running time depends on the size of the smallest prime factor. Given an integer of unknown form, these methods are usually applied before general-purpose methods to remove small factors. For example, naive trial division is a Category 1 algorithm. A general-purpose factoring algorithm, also known as a Category 2, Second Category, or Kraitchik family algorithm, has a running time which depends solely on the size of the integer to be factored. This is the type of algorithm used to factor RSA numbers. Most general-purpose factoring algorithms are based on the congruence of squares method. In number theory, there are many integer factoring algorithms that heuristically have expected running time L_n[1/2, 1 + o(1)] in little-o and L-notation. Some examples of those algorithms are the elliptic curve method and the quadratic sieve. 
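One widely used probabilistic primality test of the kind mentioned above is Miller–Rabin, sketched here in a standard textbook form (the function name is hypothetical, and this is not tied to any particular RSA implementation); each round cuts the chance of wrongly accepting a composite by at least a factor of 4:

```python
import random

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Return False if n is certainly composite, True if probably prime."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):      # handle small primes directly
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                   # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)  # random witness candidate
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                # a proves that n is composite
    return True
```

The test never rejects a prime; its only possible error is accepting a composite, with probability at most 4^(−rounds).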
Another such algorithm is the class group relations method proposed by Schnorr, Seysen, and Lenstra, which they proved only assuming the unproved Generalized Riemann Hypothesis (GRH). The Schnorr–Seysen–Lenstra probabilistic algorithm has been rigorously proven by Lenstra and Pomerance to have expected running time L_n[1/2, 1 + o(1)] by replacing the GRH assumption with the use of multipliers. The algorithm uses the class group of positive binary quadratic forms of discriminant Δ denoted by GΔ. GΔ is the set of triples of integers (a, b, c) in which those integers are relatively prime. Let n be the integer to be factored, an odd positive integer greater than a certain constant. In this factoring algorithm the discriminant Δ is chosen as a multiple of n, Δ = −dn, where d is some positive multiplier. The algorithm expects that for one d there exist enough smooth forms in GΔ. Lenstra and Pomerance show that the choice of d can be restricted to a small set to guarantee the smoothness result. Denote by PΔ the set of all primes q with Kronecker symbol (Δ/q) = 1. By constructing a set of generators of GΔ and prime forms fq of GΔ with q in PΔ, a sequence of relations between the set of generators and the fq is produced. The size of q can be bounded by c0(log|Δ|)^2 for some constant c0. The relation that will be used is a relation between the product of powers that is equal to the neutral element of GΔ. These relations will be used to construct a so-called ambiguous form of GΔ, which is an element of GΔ of order dividing 2. By calculating the corresponding factorization of Δ and by taking a gcd, this ambiguous form provides the complete prime factorization of n. This algorithm has these main steps: Let n be the number to be factored. 
To obtain an algorithm for factoring any positive integer, it is necessary to add a few steps to this algorithm, such as trial division and the Jacobi sum test. The algorithm as stated is a probabilistic algorithm as it makes random choices. Its expected running time is at most L_n[1/2, 1 + o(1)].
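For reference, the L-notation used in the running-time statements above is conventionally defined (a standard definition, not specific to this article) as:

```latex
L_{n}[\alpha, c] = \exp\!\left( (c + o(1))\, (\ln n)^{\alpha}\, (\ln \ln n)^{1-\alpha} \right)
```

Here α = 0 corresponds to time polynomial in ln n and α = 1 to fully exponential time; the GNFS bound corresponds to L_n[1/3, (64/9)^(1/3)], while the Schnorr–Seysen–Lenstra method achieves L_n[1/2, 1 + o(1)].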
[ { "paragraph_id": 0, "text": "Can integer factorization be solved in polynomial time on a classical computer?", "title": "" }, { "paragraph_id": 1, "text": "In number theory, integer factorization is the decomposition of a positive integer into a product of integers. Every positive integer greater than 1 is either the product of two or more integer factors, in which case it is called a composite number, or it is not, in which case it is called a prime number. For example, 15 is a composite number because 15 = 3 · 5, but 7 is a prime number because it cannot be decomposed in this way. If one of the factors is composite, it can in turn be written as a product of smaller factors, for example 60 = 3 · 20 = 3 · (5 · 4). Continuing this process until every factor is prime is called prime factorization; the result is always unique up to the order of the factors by the prime factorization theorem. A prime factorization algorithm typically involves testing whether each factor is prime after each step.", "title": "" }, { "paragraph_id": 2, "text": "When the numbers are sufficiently large, no efficient non-quantum integer factorization algorithm is known. However, it has not been proven that such an algorithm does not exist. The presumed difficulty of this problem is important for the algorithms used in cryptography such as RSA public-key encryption and the RSA digital signature. Many areas of mathematics and computer science have been brought to bear on the problem, including elliptic curves, algebraic number theory, and quantum computing.", "title": "" }, { "paragraph_id": 3, "text": "Not all numbers of a given length are equally hard to factor. The hardest instances of these problems (for currently known techniques) are semiprimes, the product of two prime numbers. 
When they are both large, for instance more than two thousand bits long, randomly chosen, and about the same size (but not too close, for example, to avoid efficient factorization by Fermat's factorization method), even the fastest prime factorization algorithms on the fastest computers can take enough time to make the search impractical; that is, as the number of digits of the integer being factored increases, the number of operations required to perform the factorization on any computer increases drastically.", "title": "" }, { "paragraph_id": 4, "text": "Many cryptographic protocols are based on the difficulty of factoring large composite integers or a related problem—for example, the RSA problem. An algorithm that efficiently factors an arbitrary integer would render RSA-based public-key cryptography insecure.", "title": "" }, { "paragraph_id": 5, "text": "By the fundamental theorem of arithmetic, every positive integer has a unique prime factorization. (By convention, 1 is the empty product.) Testing whether the integer is prime can be done in polynomial time, for example, by the AKS primality test. If composite, however, the polynomial time tests give no insight into how to obtain the factors.", "title": "Prime decomposition" }, { "paragraph_id": 6, "text": "Given a general algorithm for integer factorization, any integer can be factored into its constituent prime factors by repeated application of this algorithm. The situation is more complicated with special-purpose factorization algorithms, whose benefits may not be realized as well or even at all with the factors produced during decomposition. For example, if n = 171 × p × q where p < q are very large primes, trial division will quickly produce the factors 3 and 19 but will take p divisions to find the next factor. 
As a contrasting example, if n is the product of the primes 13729, 1372933, and 18848997161, where 13729 × 1372933 = 18848997157, Fermat's factorization method will begin with a = ⌈√n⌉ = 18848997159, which immediately yields b = √(a² − n) = √4 = 2 and hence the factors a − b = 18848997157 and a + b = 18848997161. While these are easily recognized as composite and prime respectively, Fermat's method will take much longer to factor the composite number because the starting value of a = ⌈√18848997157⌉ = 137292 is a factor of 10 from 1372933.", "title": "Prime decomposition" }, { "paragraph_id": 7, "text": "Among the b-bit numbers, the most difficult to factor in practice using existing algorithms are those semiprimes whose factors are of similar size. For this reason, these are the integers used in cryptographic applications.", "title": "Current state of the art" }, { "paragraph_id": 8, "text": "In 2019, Fabrice Boudot, Pierrick Gaudry, Aurore Guillevic, Nadia Heninger, Emmanuel Thomé and Paul Zimmermann factored a 240-digit (795-bit) number (RSA-240) utilizing approximately 900 core-years of computing power. The researchers estimated that a 1024-bit RSA modulus would take about 500 times as long.", "title": "Current state of the art" }, { "paragraph_id": 9, "text": "The largest such semiprime yet factored was RSA-250, an 829-bit number with 250 decimal digits, in February 2020. The total computation time was roughly 2700 core-years of computing using Intel Xeon Gold 6130 at 2.1 GHz. 
Like all recent factorization records, this factorization was completed with a highly optimized implementation of the general number field sieve run on hundreds of machines.", "title": "Current state of the art" }, { "paragraph_id": 10, "text": "No algorithm has been published that can factor all integers in polynomial time, that is, that can factor a b-bit number n in time O(b^k) for some constant k. Neither the existence nor non-existence of such algorithms has been proved, but it is generally suspected that they do not exist and hence that the problem is not in class P. The problem is clearly in class NP, but it is generally suspected that it is not NP-complete, though this has not been proven.", "title": "Current state of the art" }, { "paragraph_id": 11, "text": "There are published algorithms that are faster than O((1 + ε)^b) for all positive ε, that is, sub-exponential. As of 2022, the algorithm with the best theoretical asymptotic running time is the general number field sieve (GNFS), first published in 1993, running on a b-bit number n in time exp(((64/9)^(1/3) + o(1)) · (ln n)^(1/3) · (ln ln n)^(2/3)).", "title": "Current state of the art" }, { "paragraph_id": 12, "text": "For current computers, GNFS is the best published algorithm for large n (more than about 400 bits). For a quantum computer, however, Peter Shor discovered an algorithm in 1994 that solves it in polynomial time. This will have significant implications for cryptography if quantum computation becomes scalable. Shor's algorithm takes only O(b^3) time and O(b) space on b-bit number inputs. In 2001, Shor's algorithm was implemented for the first time, by using NMR techniques on molecules that provide seven qubits.", "title": "Current state of the art" }, { "paragraph_id": 13, "text": "It is not known exactly which complexity classes contain the decision version of the integer factorization problem (that is: does n have a factor smaller than k besides 1?). 
It is known to be in both NP and co-NP, meaning that both \"yes\" and \"no\" answers can be verified in polynomial time. An answer of \"yes\" can be certified by exhibiting a factorization n = d(n/d) with d ≤ k. An answer of \"no\" can be certified by exhibiting the factorization of n into distinct primes, all larger than k; one can verify their primality using the AKS primality test, and then multiply them to obtain n. The fundamental theorem of arithmetic guarantees that there is only one possible string of increasing primes that will be accepted, which shows that the problem is in both UP and co-UP. It is known to be in BQP because of Shor's algorithm.", "title": "Current state of the art" }, { "paragraph_id": 14, "text": "The problem is suspected to be outside all three of the complexity classes P, NP-complete, and co-NP-complete. It is therefore a candidate for the NP-intermediate complexity class. If it could be proved to be either NP-complete or co-NP-complete, this would imply NP = co-NP, a very surprising result, and therefore integer factorization is widely suspected to be outside both these classes.", "title": "Current state of the art" }, { "paragraph_id": 15, "text": "In contrast, the decision problem \"Is n a composite number?\" (or equivalently: \"Is n a prime number?\") appears to be much easier than the problem of specifying factors of n. The composite/prime problem can be solved in polynomial time (in the number b of digits of n) with the AKS primality test. In addition, there are several probabilistic algorithms that can test primality very quickly in practice if one is willing to accept a vanishingly small possibility of error. 
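One widely used probabilistic primality test of the kind described above is Miller–Rabin; the following is a minimal illustrative sketch (not a reference implementation), whose error probability on composites is at most 4^(−rounds):

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin: always correct on primes; a composite slips
    through with probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True  # probably prime

print(is_probable_prime(18848997161))  # True: the prime from the earlier example
print(is_probable_prime(18848997157))  # False: 13729 * 1372933
```

Note that the test rejects a composite without exhibiting any factor of it, which is exactly why "is n composite?" is so much easier than factoring n.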
The ease of primality testing is a crucial part of the RSA algorithm, as it is necessary to find large prime numbers to start with.", "title": "Current state of the art" }, { "paragraph_id": 16, "text": "A special-purpose factoring algorithm's running time depends on the properties of the number to be factored or on one of its unknown factors: size, special form, etc. The parameters which determine the running time vary among algorithms.", "title": "Factoring algorithms" }, { "paragraph_id": 17, "text": "An important subclass of special-purpose factoring algorithms is the Category 1 or First Category algorithms, whose running time depends on the size of the smallest prime factor. Given an integer of unknown form, these methods are usually applied before general-purpose methods to remove small factors. For example, naive trial division is a Category 1 algorithm.", "title": "Factoring algorithms" }, { "paragraph_id": 18, "text": "A general-purpose factoring algorithm, also known as a Category 2, Second Category, or Kraitchik family algorithm, has a running time which depends solely on the size of the integer to be factored. This is the type of algorithm used to factor RSA numbers. Most general-purpose factoring algorithms are based on the congruence of squares method.", "title": "Factoring algorithms" }, { "paragraph_id": 19, "text": "In number theory, there are many integer factoring algorithms that heuristically have expected running time L_n[1/2, 1+o(1)] = exp((1+o(1)) (ln n)^(1/2) (ln ln n)^(1/2))", "title": "Heuristic running time" }, { "paragraph_id": 20, "text": "in little-o and L-notation. Some examples of those algorithms are the elliptic curve method and the quadratic sieve. 
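Besides naive trial division, a classic Category 1 method is Pollard's rho (not named in the text above, but a standard example): its expected cost grows with the square root of the smallest prime factor rather than with n itself. A minimal sketch, assuming n is composite:

```python
import math
import random

def pollard_rho(n):
    """Pollard's rho: finds a nontrivial factor of a composite n.
    Expected work scales like the square root of the smallest prime
    factor, the defining property of a Category 1 method."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        f = lambda x: (x * x + c) % n
        x = y = 2
        d = 1
        while d == 1:
            x = f(x)        # tortoise: one step
            y = f(f(y))     # hare: two steps
            d = math.gcd(abs(x - y), n)
        if d != n:
            return d  # nontrivial factor found; otherwise retry with new c

print(pollard_rho(13729 * 1372933))  # one of the two prime factors
```

Running such a method first strips small factors cheaply before a general-purpose (Category 2) algorithm is applied to the remaining cofactor.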
Another such algorithm is the class group relations method proposed by Schnorr, Seysen, and Lenstra, whose running time they proved only assuming the unproved Generalized Riemann Hypothesis (GRH).", "title": "Heuristic running time" }, { "paragraph_id": 21, "text": "The Schnorr–Seysen–Lenstra probabilistic algorithm has been rigorously proven by Lenstra and Pomerance to have expected running time L_n[1/2, 1+o(1)] by replacing the GRH assumption with the use of multipliers. The algorithm uses the class group of positive binary quadratic forms of discriminant Δ, denoted by GΔ. GΔ is the set of triples of integers (a, b, c) in which those integers are relatively prime.", "title": "Rigorous running time" }, { "paragraph_id": 22, "text": "Let n be the integer to be factored, where n is an odd positive integer greater than a certain constant. In this factoring algorithm the discriminant Δ is chosen as a multiple of n, Δ = −dn, where d is some positive multiplier. The algorithm expects that for one d there exist enough smooth forms in GΔ. Lenstra and Pomerance show that the choice of d can be restricted to a small set to guarantee the smoothness result.", "title": "Rigorous running time" }, { "paragraph_id": 23, "text": "Denote by PΔ the set of all primes q with Kronecker symbol (Δ/q) = 1. By constructing a set of generators of GΔ and prime forms fq of GΔ with q in PΔ, a sequence of relations between the set of generators and fq is produced. The size of q can be bounded by c0(log|Δ|)² for some constant c0.", "title": "Rigorous running time" }, { "paragraph_id": 24, "text": "The relation that will be used is a relation between the product of powers that is equal to the neutral element of GΔ. These relations will be used to construct a so-called ambiguous form of GΔ, which is an element of GΔ of order dividing 2. 
By calculating the corresponding factorization of Δ and by taking a gcd, this ambiguous form provides the complete prime factorization of n. This algorithm has these main steps:", "title": "Rigorous running time" }, { "paragraph_id": 25, "text": "Let n be the number to be factored.", "title": "Rigorous running time" }, { "paragraph_id": 26, "text": "To obtain an algorithm for factoring any positive integer, it is necessary to add a few steps to this algorithm such as trial division and the Jacobi sum test.", "title": "Rigorous running time" }, { "paragraph_id": 27, "text": "The algorithm as stated is a probabilistic algorithm as it makes random choices. Its expected running time is at most L_n[1/2, 1+o(1)].", "title": "Rigorous running time" } ]
In number theory, integer factorization is the decomposition of a positive integer into a product of integers. Every positive integer greater than 1 is either the product of two or more integer factors, in which case it is called a composite number, or it is not, in which case it is called a prime number. For example, 15 is a composite number because 15 = 3 · 5, but 7 is a prime number because it cannot be decomposed in this way. If one of the factors is composite, it can in turn be written as a product of smaller factors, for example 60 = 3 · 20 = 3 · (5 · 4). Continuing this process until every factor is prime is called prime factorization; the result is always unique up to the order of the factors by the fundamental theorem of arithmetic. A prime factorization algorithm typically involves testing whether each factor is prime after each step. When the numbers are sufficiently large, no efficient non-quantum integer factorization algorithm is known. However, it has not been proven that such an algorithm does not exist. The presumed difficulty of this problem is important for the algorithms used in cryptography such as RSA public-key encryption and the RSA digital signature. Many areas of mathematics and computer science have been brought to bear on the problem, including elliptic curves, algebraic number theory, and quantum computing. Not all numbers of a given length are equally hard to factor. The hardest instances of these problems are semiprimes, the product of two prime numbers. When they are both large, for instance more than two thousand bits long, randomly chosen, and about the same size, even the fastest prime factorization algorithms on the fastest computers can take enough time to make the search impractical; that is, as the number of digits of the integer being factored increases, the number of operations required to perform the factorization on any computer increases drastically. 
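The "split until every factor is prime" process described above (60 = 3 · 20 = 3 · (5 · 4) = 2 · 2 · 3 · 5) can be sketched with naive trial division (illustrative only; practical for small n):

```python
def prime_factorization(n):
    """Repeatedly split off the smallest remaining factor until every
    factor is prime. By the fundamental theorem of arithmetic the
    resulting multiset of primes is unique up to order."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # the leftover cofactor is prime
    return factors

print(prime_factorization(60))  # [2, 2, 3, 5]
print(prime_factorization(7))   # [7], since 7 is prime
```

For the large, randomly chosen semiprimes discussed next, this loop would need on the order of the square root of the smaller prime factor in iterations, which is precisely why such inputs are considered hard.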
Many cryptographic protocols are based on the difficulty of factoring large composite integers or a related problem—for example, the RSA problem. An algorithm that efficiently factors an arbitrary integer would render RSA-based public-key cryptography insecure.
2002-01-09T11:18:16Z
2023-11-26T02:46:58Z
[ "Template:Redirect", "Template:Unsolved", "Template:See also", "Template:Cite web", "Template:ISBN", "Template:Reflist", "Template:Number theoretic algorithms", "Template:Authority control", "Template:Math", "Template:Nowrap", "Template:As of", "Template:Abs", "Template:Computational hardness assumptions", "Template:Short description", "Template:Citation", "Template:Cite journal", "Template:Cite book", "Template:Divisor classes" ]
https://en.wikipedia.org/wiki/Integer_factorization
15,492
Imperial units
The imperial system of units, imperial system or imperial units (also known as British Imperial or Exchequer Standards of 1826) is the system of units first defined in the British Weights and Measures Act 1824 and subsequently developed through a series of Weights and Measures Acts and amendments. The imperial system developed from earlier English units as did the related but differing system of customary units of the United States. The imperial units replaced the Winchester Standards, which were in effect from 1588 to 1825. The system came into official use across the British Empire in 1826. By the late 20th century, most nations of the former empire had officially adopted the metric system as their main system of measurement, but imperial units are still used alongside metric units in the United Kingdom and in some other parts of the former empire, notably Canada. The modern UK legislation defining the imperial system of units is given in the Weights and Measures Act 1985 (as amended). The Weights and Measures Act 1824 was initially scheduled to go into effect on 1 May 1825. The Weights and Measures Act 1825 pushed back the date to 1 January 1826. The 1824 Act allowed the continued use of pre-imperial units provided that they were customary, widely known, and clearly marked with imperial equivalents. Apothecaries' units are not mentioned in the acts of 1824 and 1825. At the time, apothecaries' weights and measures were regulated "in England, Wales, and Berwick-upon-Tweed" by the London College of Physicians, and in Ireland by the Dublin College of Physicians. In Scotland, apothecaries' units were unofficially regulated by the Edinburgh College of Physicians. The three colleges published, at infrequent intervals, pharmacopoeias, the London and Dublin editions having the force of law. 
Imperial apothecaries' measures, based on the imperial pint of 20 fluid ounces, were introduced by the publication of the London Pharmacopoeia of 1836, the Edinburgh Pharmacopoeia of 1839, and the Dublin Pharmacopoeia of 1850. The Medical Act 1858 transferred to The Crown the right to publish the official pharmacopoeia and to regulate apothecaries' weights and measures. Metric equivalents in this article usually assume the latest official definition. Before this date, the most precise measurement of the imperial Standard Yard was 0.914398415 metres. The Weights and Measures Act 1824 invalidated the various different gallons in use in the British Empire, declaring them to be replaced by the statute gallon (which became known as the imperial gallon), a unit close in volume to the ale gallon. The 1824 Act defined the volume of a gallon to be that of 10 pounds (4.54 kg) of distilled water weighed in air with brass weights with the barometer standing at 30 inches of mercury (102 kPa) at a temperature of 62 °F (17 °C). The 1824 Act went on to give this volume as 277.274 cubic inches (4.54371 litres). The Weights and Measures Act 1963 refined this definition to be the volume of 10 pounds of distilled water of density 0.998859 g/mL weighed in air of density 0.001217 g/mL against weights of density 8.136 g/mL, which works out to 4.546092 L. The Weights and Measures Act 1985 defined a gallon to be exactly 4.54609 L (approximately 277.4194 cu in). These measurements were in use from 1826, when the new imperial gallon was defined. For pharmaceutical purposes, they were replaced by the metric system in the United Kingdom on 1 January 1971. In the US, though no longer recommended, the apothecaries' system is still used occasionally in medicine, especially in prescriptions for older medications. In the 19th and 20th centuries, the UK used three different systems for mass and weight. The distinction between mass and weight is not always clearly drawn. 
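The 1985 gallon figures quoted above can be checked with a line of arithmetic, using the international definition of the inch (1 in = 2.54 cm exactly, an input not stated in the text):

```python
CUBIC_INCH_CM3 = 2.54 ** 3  # 16.387064 cm^3 per cubic inch, exact by definition

gallon_litres = 4.54609  # Weights and Measures Act 1985
gallon_cubic_inches = gallon_litres * 1000 / CUBIC_INCH_CM3
print(round(gallon_cubic_inches, 4))  # 277.4194, matching the figure above
```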
Strictly a pound is a unit of mass, but it is commonly referred to as a weight. When a distinction is necessary, the term pound-force may refer to a unit of force rather than mass. The troy pound (373.2417216 g) was made the primary unit of mass by the 1824 Act and its use was abolished in the UK on 1 January 1879, with only the troy ounce (31.1034768 g) and its decimal subdivisions retained. The Weights and Measures Act 1855 (18 & 19 Vict. c. 72) made the avoirdupois pound the primary unit of mass. In all the systems, the fundamental unit is the pound, and all other units are defined as fractions or multiples of it. The 1824 Act of Parliament defined the yard and pound by reference to the prototype standards, and it also defined the values of certain physical constants, to make provision for re-creation of the standards if they were to be damaged. For the yard, the length of a pendulum beating seconds at the latitude of Greenwich at Mean Sea Level in vacuo was defined as 39.01393 inches. For the pound, the mass of a cubic inch of distilled water at an atmospheric pressure of 30 inches of mercury and a temperature of 62° Fahrenheit was defined as 252.458 grains, with there being 7,000 grains per pound. Following the destruction of the original prototypes in the 1834 Houses of Parliament fire, it proved impossible to recreate the standards from these definitions, and a new Weights and Measures Act 1855 (18 & 19 Vict. c. 72) was passed which permitted the recreation of the prototypes from recognized secondary standards. Since the Weights and Measures Act 1985, British law defines base imperial units in terms of their metric equivalent. The metric system is routinely used in business and technology within the United Kingdom, with imperial units remaining in widespread use amongst the public. All UK roads use the imperial system except for weight limits, and newer height or width restriction signs give metric alongside imperial. 
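The grain-based mass relationships quoted above can be verified numerically. This sketch assumes the modern international grain of exactly 64.79891 mg and the standard relations of 480 grains to the troy ounce and 12 troy ounces to the troy pound, which are not stated explicitly in the text:

```python
GRAIN_G = 0.06479891  # international grain in grams, exact by definition

troy_ounce_g = 480 * GRAIN_G          # troy ounce = 480 grains
troy_pound_g = 12 * troy_ounce_g      # troy pound = 12 troy ounces = 5760 grains
avoirdupois_pound_g = 7000 * GRAIN_G  # 7,000 grains per pound, as stated above

print(round(troy_ounce_g, 7))         # 31.1034768, the figure quoted above
print(round(troy_pound_g, 7))         # 373.2417216, the figure quoted above
print(round(avoirdupois_pound_g, 5))  # 453.59237
```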
Traders in the UK may accept requests from customers specified in imperial units, and scales which display in both unit systems are commonplace in the retail trade. Metric price signs may be accompanied by imperial price signs provided that the imperial signs are no larger and no more prominent than the metric ones. The United Kingdom completed its official partial transition to the metric system in 1995, with imperial units still legally mandated for certain applications such as draught beer and cider, and road-signs. Therefore, the speedometers on vehicles sold in the UK must be capable of displaying miles per hour. Even though the troy pound was outlawed in the UK in the Weights and Measures Act 1878, the troy ounce may still be used for the weights of precious stones and metals. The original railways (many built in the Victorian era) are a big user of imperial units, with distances officially measured in miles and yards or miles and chains, and also feet and inches, and speeds are in miles per hour. Some British people still use one or more imperial units in everyday life for distance (miles, yards, feet, and inches) and some types of volume measurement (especially milk and beer in pints; rarely for canned or bottled soft drinks, or petrol). As of February 2021, many British people also still use imperial units in everyday life for body weight (stones and pounds for adults, pounds and ounces for babies). Government documents aimed at the public may give body weight and height in imperial units as well as in metric. A survey in 2015 found that many people did not know their body weight or height in both systems. People under the age of 40 preferred the metric system but people aged 40 and over preferred the imperial system. As in other English-speaking countries, including Australia, Canada and the United States, the height of horses is usually measured in hands, standardised to 4 inches (102 mm). 
Fuel consumption for vehicles is commonly stated in miles per gallon (mpg), though official figures always include litres per 100 km equivalents and fuel is sold in litres. When sold draught in licensed premises, beer and cider must be sold in pints, half-pints or third-pints. Cow's milk is available in both litre- and pint-based containers in supermarkets and shops. Areas of land associated with farming, forestry and real estate are commonly advertised in acres and square feet but, for contracts and land registration purposes, the units are always hectares and square metres. Office space and industrial units are usually advertised in square feet. Steel pipe sizes are sold in increments of inches, while copper pipe is sold in increments of millimetres. Road bicycles have their frames measured in centimetres, while off-road bicycles have their frames measured in inches. Display sizes for screens on television sets and computer monitors are always diagonally measured in inches. Food sold by length or width, e.g. pizzas or sandwiches, is generally sold in inches. Clothing is usually sized in inches, with the metric equivalent often shown as a small supplementary indicator. Gas is usually measured by the cubic foot or cubic metre, but is billed like electricity by the kilowatt hour. Pre-packaged products can show both metric and imperial measures, and it is also common to see imperial pack sizes with metric only labels, e.g. a 1 lb (454 g) tin of Lyle's Golden Syrup is always labelled 454 g with no imperial indicator. Similarly most jars of jam and packs of sausages are labelled 454 g with no imperial indicator. India began converting to the metric system from the imperial system between 1955 and 1962. The metric system in weights and measures was adopted by the Indian Parliament in December 1956 with the Standards of Weights and Measures Act, which took effect beginning 1 October 1958. By 1962, metric units became "mandatory and exclusive." 
Today all official measurements are made in the metric system. In common usage some older Indians may still refer to imperial units. Some measurements, such as the heights of mountains, are still recorded in feet. Tyre rim diameters are still measured in inches, as used worldwide. Industries like the construction and the real estate industry still use both the metric and the imperial system though it is more common for sizes of homes to be given in square feet and land in acres. In Standard Indian English, as in Australian, Singaporean, and British English, metric units such as the litre, metre, and tonne utilise the traditional spellings brought over from French, which differ from those used in the United States and the Philippines. The imperial long ton is invariably spelt with one 'n'. Hong Kong has three main systems of units of measurement in current use: the Chinese units of measurement, the imperial system, and the metric system. In 1976 the Hong Kong Government started the conversion to the metric system, and as of 2012 measurements for government purposes, such as road signs, are almost always in metric units. All three systems are officially permitted for trade, and in the wider society a mixture of all three systems prevails. The Chinese system's most commonly used units for length are 里 (lei), 丈 (zoeng), 尺 (cek), 寸 (cyun), 分 (fan) in descending scale order. These units are now rarely used in daily life, the imperial and metric systems being preferred. The imperial equivalents are written with the same basic Chinese characters as the Chinese system. In order to distinguish between the units of the two systems, the units can be prefixed with "Ying" (英, jing) for the imperial system and "Wa" (華, waa) for the Chinese system. In writing, derived characters are often used, with an additional 口 (mouth) radical to the left of the original Chinese character, for writing imperial units. The most commonly used units are the mile or "li" (哩, li), the yard or "ma" (碼, maa), the foot or "chek" (呎, cek), and the inch or "tsun" (吋, cyun). 
The traditional measure of flat area is the square foot (方呎, 平方呎, fong cek, ping fong cek) of the imperial system, which is still in common use for real estate purposes. The measurement of agricultural plots and fields is traditionally conducted in 畝 (mau) of the Chinese system. For the measurement of volume, Hong Kong officially uses the metric system, though the gallon (加侖, gaa leon) is also occasionally used. During the 1970s, the metric system and SI units were introduced in Canada to replace the imperial system. Within the government, efforts to implement the metric system were extensive; almost any agency, institution, or function provided by the government uses SI units exclusively. Imperial units were eliminated from all public road signs, although both systems of measurement can still be found on privately owned signs, such as the height warnings at the entrance of a parkade. In the 1980s, momentum to fully convert to the metric system stalled when the government of Brian Mulroney was elected. There was heavy opposition to metrication and as a compromise the government maintains legal definitions for and allows use of imperial units as long as metric units are shown as well. The law requires that measured products (such as fuel and meat) be priced in metric units and an imperial price can be shown if a metric price is present. There tends to be leniency in regards to fruits and vegetables being priced in imperial units only. Environment Canada still offers an imperial unit option beside metric units, even though weather is typically measured and reported in metric units in the Canadian media. Some radio stations near the United States border (such as CIMX and CIDR) primarily use imperial units to report the weather. Railways in Canada also continue to use imperial units. Imperial units are still used in ordinary conversation. Today, Canadians typically use a mix of metric and imperial measurements in their daily lives. 
The use of the metric and imperial systems varies by age. The older generation mostly uses the imperial system, while the younger generation more often uses the metric system. Quebec has implemented metrication more fully. Newborns are measured in SI at hospitals, but the birth weight and length are also announced to family and friends in imperial units. Drivers' licences use SI units, though many English-speaking Canadians give their height and weight in imperial. In livestock auction markets, cattle are sold in dollars per hundredweight (short), whereas hogs are sold in dollars per hundred kilograms. Imperial units still dominate in recipes, construction, house renovation and gardening. Land is now surveyed and registered in metric units whilst initial surveys used imperial units. For example, partitioning of farm land on the prairies in the late 19th and early 20th centuries was done in imperial units; this accounts for imperial units of distance and area retaining wide use in the Prairie Provinces. In English-speaking Canada commercial and residential spaces are mostly (but not exclusively) constructed using square feet, while in French-speaking Quebec commercial and residential spaces are constructed in metres and advertised using both square metres and square feet as equivalents. Carpet or flooring tile is purchased by the square foot, but less frequently also in square metres. Motor-vehicle fuel consumption is reported in both litres per 100 km and statute miles per imperial gallon, leading to the erroneous impression that Canadian vehicles are 20% more fuel-efficient than their apparently identical American counterparts for which fuel economy is reported in statute miles per US gallon (neither country specifies which gallon is used). Canadian railways maintain exclusive use of imperial measurements to describe train length (feet), train height (feet), capacity (tons), speed (mph), and trackage (miles). 
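The "apparently 20% more fuel-efficient" effect described above follows directly from the two gallon sizes. A quick arithmetic sketch, using the exact litre definitions of both gallons (the 8 L/100 km consumption figure is a hypothetical example, not from the text):

```python
IMPERIAL_GALLON_L = 4.54609      # UK imperial gallon, exact
US_GALLON_L = 3.785411784        # US gallon, exact
MILE_KM = 1.609344               # statute mile, exact

def mpg_from_l_per_100km(l_per_100km, gallon_litres):
    """Convert litres per 100 km into miles per gallon for a given gallon size."""
    litres_per_mile = l_per_100km / 100 * MILE_KM
    return gallon_litres / litres_per_mile

same_car = 8.0  # hypothetical consumption in L/100 km
print(round(mpg_from_l_per_100km(same_car, IMPERIAL_GALLON_L), 1))  # 35.3 mpg (imperial)
print(round(mpg_from_l_per_100km(same_car, US_GALLON_L), 1))        # 29.4 mpg (US)
print(round(IMPERIAL_GALLON_L / US_GALLON_L - 1, 3))                # 0.201, i.e. about 20%
```

The identical car thus scores roughly 20% higher in Canadian (imperial) mpg figures than in American (US) ones, purely because the imperial gallon is about 20% larger.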
Imperial units also retain common use in firearms and ammunition. Imperial measures are still used in the description of cartridge types, even when the cartridge is of relatively recent invention (e.g., .204 Ruger, .17 HMR, where the calibre is expressed in decimal fractions of an inch). Ammunition that is already classified in metric is still kept metric (e.g., 9×19mm). In the manufacture of ammunition, bullet and powder weights are expressed in terms of grains for both metric and imperial cartridges. In keeping with the international standard, air navigation is based on nautical units, e.g., the nautical mile, which is neither imperial nor metric, and altitude is measured in imperial feet. While metrication in Australia has largely ended the official use of imperial units, for particular measurements international use of imperial units is still followed. The cultural transmission of British and American English in Australia has also been noted as a cause of residual use of imperial units of measure. New Zealand introduced the metric system on 15 December 1976. Aviation was exempt, with altitude and airport elevation continuing to be measured in feet whilst navigation is done in nautical miles; all other aspects (fuel quantity, aircraft weight, runway length, etc.) use metric units. Screen sizes for devices such as televisions, monitors and phones, and wheel rim sizes for vehicles, are stated in inches, as is the convention in the rest of the world. A 1992 study found continued use of imperial units for birth weight and human height alongside metric units. Ireland has officially changed over to the metric system since entering the European Union, with distances on new road signs being metric since 1997 and speed limits being metric since 2005. The imperial system remains in limited use – for sales of beer in pubs (traditionally sold by the pint). 
All other goods are required by law to be sold in metric units with traditional quantities being retained for goods like butter and sausages, which are sold in 454 grams (1 lb) packaging. The majority of cars sold pre-2005 feature speedometers with miles per hour as the primary unit, but with a kilometres per hour display. Often signs such as those for bridge height can display both metric and imperial units. Imperial measurements continue to be used colloquially by the general population especially with height and distance measurements such as feet, inches, and acres as well as for weight with pounds and stones still in common use among people of all ages. Measurements such as yards have fallen out of favour with younger generations. Ireland's railways still use imperial measurements for distances and speed signage. Property is usually listed in square feet as well as in square metres. Horse racing in Ireland still continues to use stones, pounds, miles and furlongs as measurements. Imperial measurements remain in general use in the Bahamas. Legally, both the imperial and metric systems are recognised by the Weights and Measures Act 2006. According to the CIA, in June 2009, Myanmar was one of three countries that had not adopted the SI metric system as their official system of weights and measures. Metrication efforts began in 2011. With the help of the German National Metrology Institute, the Burmese government set a goal to metricate by 2019, but this goal was not met. Some imperial measurements remain in limited use in Malaysia, the Philippines, Sri Lanka and South Africa. Measurements in feet and inches, especially for a person's height, are frequently encountered in conversation and non-governmental publications. Prior to metrication, it was a common practice in Malaysia for people to refer to unnamed locations and small settlements along major roads by referring to how many miles the said locations were from the nearest major town. 
In some cases, these eventually became the official names of the locations; in other cases, such names have been largely or completely superseded by new names. An example of the former is Batu 32 (literally "Mile 32" in Malay), which refers to the area surrounding the intersection between Federal Route 22 (the Tamparuli-Sandakan highway) and Federal Route 13 (the Sandakan-Tawau highway). The area is so named because it is 32 miles west of Sandakan, the nearest major town. Petrol is still sold by the imperial gallon in Anguilla, Antigua and Barbuda, Belize, Myanmar, the Cayman Islands, Dominica, Grenada, Montserrat, St Kitts and Nevis and St. Vincent and the Grenadines. The United Arab Emirates Cabinet in 2009 issued the Decree No. (270 / 3) specifying that, from 1 January 2010, the new unit sale price for petrol would be the litre and not the gallon, in line with the UAE Cabinet Decision No. 31 of 2006 on the national system of measurement, which mandates the use of the International System of Units as the basis for the legal units of measurement in the country. Sierra Leone switched to selling fuel by the litre in May 2011. In October 2011, the Antigua and Barbuda government announced the re-launch of the Metrication Programme in accordance with the Metrology Act 2007, which established the International System of Units as the legal system of units. The Antigua and Barbuda government has committed to a full conversion from the imperial system by the first quarter of 2015.
[ { "paragraph_id": 0, "text": "The imperial system of units, imperial system or imperial units (also known as British Imperial or Exchequer Standards of 1826) is the system of units first defined in the British Weights and Measures Act 1824 and subsequently developed through a series of Weights and Measures Acts and amendments.", "title": "" }, { "paragraph_id": 1, "text": "The imperial system developed from earlier English units as did the related but differing system of customary units of the United States. The imperial units replaced the Winchester Standards, which were in effect from 1588 to 1825. The system came into official use across the British Empire in 1826.", "title": "" }, { "paragraph_id": 2, "text": "By the late 20th century, most nations of the former empire had officially adopted the metric system as their main system of measurement, but imperial units are still used alongside metric units in the United Kingdom and in some other parts of the former empire, notably Canada.", "title": "" }, { "paragraph_id": 3, "text": "The modern UK legislation defining the imperial system of units is given in the Weights and Measures Act 1985 (as amended).", "title": "" }, { "paragraph_id": 4, "text": "The Weights and Measures Act 1824 was initially scheduled to go into effect on 1 May 1825. The Weights and Measures Act 1825 pushed back the date to 1 January 1826. The 1824 Act allowed the continued use of pre-imperial units provided that they were customary, widely known, and clearly marked with imperial equivalents.", "title": "Implementation" }, { "paragraph_id": 5, "text": "Apothecaries' units are not mentioned in the acts of 1824 and 1825. At the time, apothecaries' weights and measures were regulated \"in England, Wales, and Berwick-upon-Tweed\" by the London College of Physicians, and in Ireland by the Dublin College of Physicians. In Scotland, apothecaries' units were unofficially regulated by the Edinburgh College of Physicians. 
The three colleges published, at infrequent intervals, pharmacopoeias, the London and Dublin editions having the force of law.", "title": "Implementation" }, { "paragraph_id": 6, "text": "Imperial apothecaries' measures, based on the imperial pint of 20 fluid ounces, were introduced by the publication of the London Pharmacopoeia of 1836, the Edinburgh Pharmacopoeia of 1839, and the Dublin Pharmacopoeia of 1850. The Medical Act 1858 transferred to The Crown the right to publish the official pharmacopoeia and to regulate apothecaries' weights and measures.", "title": "Implementation" }, { "paragraph_id": 7, "text": "Metric equivalents in this article usually assume the latest official definition. Before this date, the most precise measurement of the imperial Standard Yard was 0.914398415 metres.", "title": "Units" }, { "paragraph_id": 8, "text": "The Weights and Measures Act 1824 invalidated the various different gallons in use in the British Empire, declaring them to be replaced by the statute gallon (which became known as the imperial gallon), a unit close in volume to the ale gallon. The 1824 Act defined the volume of a gallon to be that of 10 pounds (4.54 kg) of distilled water weighed in air with brass weights with the barometer standing at 30 inches of mercury (102 kPa) at a temperature of 62 °F (17 °C). The 1824 Act went on to give this volume as 277.274 cubic inches (4.54371 litres). The Weights and Measures Act 1963 refined this definition to be the volume of 10 pounds of distilled water of density 0.998859 g/mL weighed in air of density 0.001217 g/mL against weights of density 8.136 g/mL, which works out to 4.546092 L. The Weights and Measures Act 1985 defined a gallon to be exactly 4.54609 L (approximately 277.4194 cu in).", "title": "Units" }, { "paragraph_id": 9, "text": "These measurements were in use from 1826, when the new imperial gallon was defined. 
For pharmaceutical purposes, they were replaced by the metric system in the United Kingdom on 1 January 1971. In the US, though no longer recommended, the apothecaries' system is still used occasionally in medicine, especially in prescriptions for older medications.", "title": "Units" }, { "paragraph_id": 10, "text": "In the 19th and 20th centuries, the UK used three different systems for mass and weight.", "title": "Units" }, { "paragraph_id": 11, "text": "The distinction between mass and weight is not always clearly drawn. Strictly a pound is a unit of mass, but it is commonly referred to as a weight. When a distinction is necessary, the term pound-force may refer to a unit of force rather than mass. The troy pound (373.2417216 g) was made the primary unit of mass by the 1824 Act and its use was abolished in the UK on 1 January 1879, with only the troy ounce (31.1034768 g) and its decimal subdivisions retained. The Weights and Measures Act 1855 (18 & 19 Vict. c. 72) made the avoirdupois pound the primary unit of mass. In all the systems, the fundamental unit is the pound, and all other units are defined as fractions or multiples of it.", "title": "Units" }, { "paragraph_id": 12, "text": "The 1824 Act of Parliament defined the yard and pound by reference to the prototype standards, and it also defined the values of certain physical constants, to make provision for re-creation of the standards if they were to be damaged. For the yard, the length of a pendulum beating seconds at the latitude of Greenwich at Mean Sea Level in vacuo was defined as 39.01393 inches. 
For the pound, the mass of a cubic inch of distilled water at an atmospheric pressure of 30 inches of mercury and a temperature of 62° Fahrenheit was defined as 252.458 grains, with there being 7,000 grains per pound.", "title": "Natural equivalents" }, { "paragraph_id": 13, "text": "Following the destruction of the original prototypes in the 1834 Houses of Parliament fire, it proved impossible to recreate the standards from these definitions, and a new Weights and Measures Act 1855 (18 & 19 Vict. c. 72) was passed which permitted the recreation of the prototypes from recognized secondary standards.", "title": "Natural equivalents" }, { "paragraph_id": 14, "text": "Since the Weights and Measures Act 1985, British law defines base imperial units in terms of their metric equivalent. The metric system is routinely used in business and technology within the United Kingdom, with imperial units remaining in widespread use amongst the public. All UK roads use the imperial system except for weight limits, and newer height or width restriction signs give metric alongside imperial.", "title": "Current use" }, { "paragraph_id": 15, "text": "Traders in the UK may accept requests from customers specified in imperial units, and scales which display in both unit systems are commonplace in the retail trade. Metric price signs may be accompanied by imperial price signs provided that the imperial signs are no larger and no more prominent than the metric ones.", "title": "Current use" }, { "paragraph_id": 16, "text": "The United Kingdom completed its official partial transition to the metric system in 1995, with imperial units still legally mandated for certain applications such as draught beer and cider, and road-signs. Therefore, the speedometers on vehicles sold in the UK must be capable of displaying miles per hour. 
Even though the troy pound was outlawed in the UK in the Weights and Measures Act 1878, the troy ounce may still be used for the weights of precious stones and metals. The railways (many built in the Victorian era) remain major users of imperial units, with distances officially measured in miles and yards or miles and chains, and also feet and inches, and speeds in miles per hour.", "title": "Current use" }, { "paragraph_id": 17, "text": "Some British people still use one or more imperial units in everyday life for distance (miles, yards, feet, and inches) and some types of volume measurement (especially milk and beer in pints; rarely for canned or bottled soft drinks, or petrol). As of February 2021, many British people also still use imperial units in everyday life for body weight (stones and pounds for adults, pounds and ounces for babies). Government documents aimed at the public may give body weight and height in imperial units as well as in metric. A survey in 2015 found that many people did not know their body weight or height in both systems. People under the age of 40 preferred the metric system but people aged 40 and over preferred the imperial system. As in other English-speaking countries, including Australia, Canada and the United States, the height of horses is usually measured in hands, standardised to 4 inches (102 mm). Fuel consumption for vehicles is commonly stated in miles per gallon (mpg), though official figures always include litres per 100 km equivalents and fuel is sold in litres. When sold on draught in licensed premises, beer and cider must be sold in pints, half-pints or third-pints. Cow's milk is available in both litre- and pint-based containers in supermarkets and shops. 
Areas of land associated with farming, forestry and real estate are commonly advertised in acres and square feet but, for contracts and land registration purposes, the units are always hectares and square metres.", "title": "Current use" }, { "paragraph_id": 18, "text": "Office space and industrial units are usually advertised in square feet. Steel pipe sizes are sold in increments of inches, while copper pipe is sold in increments of millimetres. Road bicycles have their frames measured in centimetres, while off-road bicycles have their frames measured in inches. Display sizes for screens on television sets and computer monitors are always diagonally measured in inches. Food sold by length or width, e.g. pizzas or sandwiches, is generally sold in inches. Clothing is usually sized in inches, with the metric equivalent often shown as a small supplementary indicator. Gas is usually measured by the cubic foot or cubic metre, but is billed like electricity by the kilowatt hour.", "title": "Current use" }, { "paragraph_id": 19, "text": "Pre-packaged products can show both metric and imperial measures, and it is also common to see imperial pack sizes with metric only labels, e.g. a 1 lb (454 g) tin of Lyle's Golden Syrup is always labelled 454 g with no imperial indicator. Similarly most jars of jam and packs of sausages are labelled 454 g with no imperial indicator.", "title": "Current use" }, { "paragraph_id": 20, "text": "India began converting to the metric system from the imperial system between 1955 and 1962. The metric system in weights and measures was adopted by the Indian Parliament in December 1956 with the Standards of Weights and Measures Act, which took effect beginning 1 October 1958. By 1962, metric units became \"mandatory and exclusive.\"", "title": "Current use" }, { "paragraph_id": 21, "text": "Today all official measurements are made in the metric system. In common usage some older Indians may still refer to imperial units. 
Some measurements, such as the heights of mountains, are still recorded in feet. Tyre rim diameters are still measured in inches, as used worldwide. Industries such as construction and real estate still use both the metric and the imperial systems, though it is more common for sizes of homes to be given in square feet and land in acres.", "title": "Current use" }, { "paragraph_id": 22, "text": "In Standard Indian English, as in Australian, Singaporean, and British English, metric units such as the litre, metre, and tonne utilise the traditional spellings brought over from French, which differ from those used in the United States and the Philippines. The imperial long ton is invariably spelt with one 'n'.", "title": "Current use" }, { "paragraph_id": 23, "text": "Hong Kong has three main systems of units of measurement in current use:", "title": "Current use" }, { "paragraph_id": 24, "text": "In 1976 the Hong Kong Government started the conversion to the metric system, and as of 2012 measurements for government purposes, such as road signs, are almost always in metric units. All three systems are officially permitted for trade, and in the wider society a mixture of all three systems prevails.", "title": "Current use" }, { "paragraph_id": 25, "text": "The Chinese system's most commonly used units for length are 里 (lei), 丈 (zoeng), 尺 (cek), 寸 (cyun), 分 (fan) in descending scale order. These units are now rarely used in daily life, the imperial and metric systems being preferred. The imperial equivalents are written with the same basic Chinese characters as the Chinese system. In order to distinguish between the units of the two systems, the units can be prefixed with \"Ying\" (英, jing) for the imperial system and \"Wa\" (華, waa) for the Chinese system. In writing, derived characters are often used, with an additional 口 (mouth) radical to the left of the original Chinese character, for writing imperial units. 
The most commonly used units are the mile or \"li\" (哩, li), the yard or \"ma\" (碼, maa), the foot or \"chek\" (呎, cek), and the inch or \"tsun\" (吋, cyun).", "title": "Current use" }, { "paragraph_id": 26, "text": "The traditional measure of flat area is the square foot (方呎, 平方呎, fong cek, ping fong cek) of the imperial system, which is still in common use for real estate purposes. The measurement of agricultural plots and fields is traditionally conducted in 畝 (mau) of the Chinese system.", "title": "Current use" }, { "paragraph_id": 27, "text": "For the measurement of volume, Hong Kong officially uses the metric system, though the gallon (加侖, gaa leon) is also occasionally used.", "title": "Current use" }, { "paragraph_id": 28, "text": "During the 1970s, the metric system and SI units were introduced in Canada to replace the imperial system. Within the government, efforts to implement the metric system were extensive; almost any agency, institution, or function provided by the government uses SI units exclusively. Imperial units were eliminated from all public road signs, but both systems of measurement can still be found on privately owned signs, such as the height warnings at the entrance of a parkade. In the 1980s, momentum to fully convert to the metric system stalled when the government of Brian Mulroney was elected. There was heavy opposition to metrication, and as a compromise the government maintains legal definitions for and allows use of imperial units as long as metric units are shown as well. The law requires that measured products (such as fuel and meat) be priced in metric units; an imperial price can be shown if a metric price is present. There tends to be leniency regarding fruits and vegetables being priced in imperial units only. Environment Canada still offers an imperial unit option beside metric units, even though weather is typically measured and reported in metric units in the Canadian media. 
Some radio stations near the United States border (such as CIMX and CIDR) primarily use imperial units to report the weather. Railways in Canada also continue to use imperial units.", "title": "Current use" }, { "paragraph_id": 29, "text": "Imperial units are still used in ordinary conversation. Today, Canadians typically use a mix of metric and imperial measurements in their daily lives. The use of the metric and imperial systems varies by age. The older generation mostly uses the imperial system, while the younger generation more often uses the metric system. Quebec has implemented metrication more fully. Newborns are measured in SI at hospitals, but the birth weight and length are also announced to family and friends in imperial units. Drivers' licences use SI units, though many English-speaking Canadians give their height and weight in imperial. In livestock auction markets, cattle are sold in dollars per hundredweight (short), whereas hogs are sold in dollars per hundred kilograms. Imperial units still dominate in recipes, construction, house renovation and gardening. Land is now surveyed and registered in metric units, whilst initial surveys used imperial units. For example, partitioning of farm land on the prairies in the late 19th and early 20th centuries was done in imperial units; this accounts for imperial units of distance and area retaining wide use in the Prairie Provinces. In English-speaking Canada commercial and residential spaces are mostly (but not exclusively) constructed using square feet, while in French-speaking Quebec commercial and residential spaces are constructed in metres and advertised using both square metres and square feet as equivalents. Carpet or flooring tile is purchased by the square foot, and less frequently by the square metre. 
Motor-vehicle fuel consumption is reported in both litres per 100 km and statute miles per imperial gallon, leading to the erroneous impression that Canadian vehicles are 20% more fuel-efficient than their apparently identical American counterparts for which fuel economy is reported in statute miles per US gallon (neither country specifies which gallon is used). Canadian railways maintain exclusive use of imperial measurements to describe train length (feet), train height (feet), capacity (tons), speed (mph), and trackage (miles).", "title": "Current use" }, { "paragraph_id": 30, "text": "Imperial units also retain common use in firearms and ammunition. Imperial measures are still used in the description of cartridge types, even when the cartridge is of relatively recent invention (e.g., .204 Ruger, .17 HMR, where the calibre is expressed in decimal fractions of an inch). Ammunition that is already classified in metric is still kept metric (e.g., 9×19mm). In the manufacture of ammunition, bullet and powder weights are expressed in terms of grains for both metric and imperial cartridges.", "title": "Current use" }, { "paragraph_id": 31, "text": "In keeping with the international standard, air navigation is based on nautical units, e.g., the nautical mile, which is neither imperial nor metric, and altitude is measured in imperial feet.", "title": "Current use" }, { "paragraph_id": 32, "text": "While metrication in Australia has largely ended the official use of imperial units, for particular measurements, international use of imperial units is still followed.", "title": "Current use" }, { "paragraph_id": 33, "text": "As a result of cultural transmission of British and American English in Australia, there has also been noted to be a cause for residual use of imperial units of measure.", "title": "Current use" }, { "paragraph_id": 34, "text": "New Zealand introduced the metric system on 15 December 1976. 
Aviation was exempt, with altitude and airport elevation continuing to be measured in feet whilst navigation is done in nautical miles; all other aspects (fuel quantity, aircraft weight, runway length, etc.) use metric units.", "title": "Current use" }, { "paragraph_id": 35, "text": "Screen sizes for devices such as televisions, monitors and phones, and wheel rim sizes for vehicles, are stated in inches, as is the convention in the rest of the world. A 1992 study found a continued use of imperial units for birth weight and human height alongside metric units.", "title": "Current use" }, { "paragraph_id": 36, "text": "Ireland officially changed over to the metric system after entering the European Union, with distances on new road signs being metric since 1997 and speed limits being metric since 2005. The imperial system remains in limited use – for sales of beer in pubs (traditionally sold by the pint). All other goods are required by law to be sold in metric units, with traditional quantities being retained for goods like butter and sausages, which are sold in 454 grams (1 lb) packaging. The majority of cars sold pre-2005 feature speedometers with miles per hour as the primary unit, but with a kilometres per hour display. Signs such as those for bridge height often display both metric and imperial units. Imperial measurements continue to be used colloquially by the general population, especially for height and distance measurements such as feet, inches, and acres, as well as for weight, with pounds and stones still in common use among people of all ages. Measurements such as yards have fallen out of favour with younger generations. Ireland's railways still use imperial measurements for distances and speed signage. 
Property is usually listed in both square feet and square metres.", "title": "Current use" }, { "paragraph_id": 37, "text": "Horse racing in Ireland continues to use stones, pounds, miles and furlongs as measurements.", "title": "Current use" }, { "paragraph_id": 38, "text": "Imperial measurements remain in general use in the Bahamas.", "title": "Current use" }, { "paragraph_id": 39, "text": "Legally, both the imperial and metric systems are recognised by the Weights and Measures Act 2006.", "title": "Current use" }, { "paragraph_id": 40, "text": "According to the CIA, in June 2009, Myanmar was one of three countries that had not adopted the SI metric system as their official system of weights and measures. Metrication efforts began in 2011 with the help of the German National Metrology Institute. The Burmese government set a goal to metricate by 2019, but this was not met.", "title": "Current use" }, { "paragraph_id": 41, "text": "Some imperial measurements remain in limited use in Malaysia, the Philippines, Sri Lanka and South Africa. Measurements in feet and inches, especially for a person's height, are frequently encountered in conversation and non-governmental publications.", "title": "Current use" }, { "paragraph_id": 42, "text": "Prior to metrication, it was a common practice in Malaysia for people to refer to unnamed locations and small settlements along major roads by how many miles they were from the nearest major town. In some cases, these eventually became the official names of the locations; in other cases, such names have been largely or completely superseded by new names. An example of the former is Batu 32 (literally \"Mile 32\" in Malay), which refers to the area surrounding the intersection between Federal Route 22 (the Tamparuli-Sandakan highway) and Federal Route 13 (the Sandakan-Tawau highway). 
The area is so named because it is 32 miles west of Sandakan, the nearest major town.", "title": "Current use" }, { "paragraph_id": 43, "text": "Petrol is still sold by the imperial gallon in Anguilla, Antigua and Barbuda, Belize, Myanmar, the Cayman Islands, Dominica, Grenada, Montserrat, St Kitts and Nevis and St. Vincent and the Grenadines. The United Arab Emirates Cabinet in 2009 issued the Decree No. (270 / 3) specifying that, from 1 January 2010, the new unit sale price for petrol will be the litre and not the gallon, which was in line with the UAE Cabinet Decision No. 31 of 2006 on the national system of measurement, which mandates the use of International System of units as a basis for the legal units of measurement in the country. Sierra Leone switched to selling fuel by the litre in May 2011.", "title": "Current use" }, { "paragraph_id": 44, "text": "In October 2011, the Antigua and Barbuda government announced the re-launch of the Metrication Programme in accordance with the Metrology Act 2007, which established the International System of Units as the legal system of units. The Antigua and Barbuda government has committed to a full conversion from the imperial system by the first quarter of 2015.", "title": "Current use" } ]
The imperial system of units, imperial system or imperial units is the system of units first defined in the British Weights and Measures Act 1824 and continued to be developed through a series of Weights and Measures Acts and amendments. The imperial system developed from earlier English units as did the related but differing system of customary units of the United States. The imperial units replaced the Winchester Standards, which were in effect from 1588 to 1825. The system came into official use across the British Empire in 1826. By the late 20th century, most nations of the former empire had officially adopted the metric system as their main system of measurement, but imperial units are still used alongside metric units in the United Kingdom and in some other parts of the former empire, notably Canada. The modern UK legislation defining the imperial system of units is given in the Weights and Measures Act 1985.
2002-02-25T15:51:15Z
2023-12-27T22:21:30Z
[ "Template:Cite news", "Template:Imperial units", "Template:Systems of measurement", "Template:About", "Template:Use dmy dates", "Template:Cite encyclopedia", "Template:Cslist", "Template:See also", "Template:Cite EB1911", "Template:Reflist", "Template:Overline", "Template:Columns-list", "Template:Cite web", "Template:Val", "Template:Convert", "Template:1/4", "Template:Citation needed", "Template:Anchor", "Template:Cn", "Template:Cvt", "Template:Cite book", "Template:Short description", "Template:Frac", "Template:As of", "Template:Gaps", "Template:Cite journal", "Template:UK SI", "Template:Webarchive", "Template:Fraction", "Template:1/2", "Template:Refn", "Template:Unreliable source?", "Template:Commons category", "Template:Nowrap", "Template:Main", "Template:Lang" ]
https://en.wikipedia.org/wiki/Imperial_units